
In deep learning scientific research, how to efficiently manage code and experiments?

PHPz
2023-10-23 11:21

Answer 1

Author: Ye Xiaofei
Link: https://www.zhihu.com/question/269707221/answer/2281374258

When I was at Mercedes-Benz in North America, there was a period when, in order to test different structures and parameters, we would train more than a hundred different models in a week. To cope with this, I combined the practices of senior colleagues at the company with my own thinking and summary, and developed a set of efficient code and experiment management methods that successfully helped the project ship. I am sharing it with you here.

Use YAML files to configure training parameters

I know that many open-source repos like to pass a large number of training- and model-related parameters in through argparse, which is actually quite inefficient. On the one hand, manually entering many parameters for every training run is tedious; on the other hand, if you instead edit the default values in the code each time, you waste a lot of time. I recommend controlling all model- and training-related parameters with a single YAML file, and linking the YAML's name to the model name and a timestamp, as the well-known 3D point cloud detection library OpenPCDet does in the link below.

github.com/open-mmlab/OpenPCDet/blob/master/tools/cfgs/kitti_models/pointrcnn.yaml

The figure below shows part of the yaml file from the link above. This configuration file covers how the point clouds are preprocessed, the classification categories, the various parameters of the backbone, and the choice of optimizer and loss (not shown in the figure; see the link above for the complete file). In other words, essentially every factor that can affect your model is contained in this one file, and in the code a simple yaml.load() reads all of these parameters into a dict. More importantly, the configuration file can be saved in the same folder as your checkpoints, so you can use it directly to resume training from a breakpoint, finetune, or run tests, and it is very convenient for matching results with the corresponding parameters.

[Figure: excerpt of the pointrcnn.yaml configuration file]
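As a minimal sketch of this idea (the function and the config keys below are my own illustration, not OpenPCDet code), loading such a YAML file into a dict and archiving a copy next to the checkpoints might look like:

```python
import shutil
from pathlib import Path

import yaml  # pip install pyyaml

def load_config(cfg_path: str, ckpt_dir: str) -> dict:
    """Read all training/model parameters from a YAML file into a dict,
    and archive a copy alongside the checkpoints for reproducibility."""
    cfg_path = Path(cfg_path)
    with open(cfg_path) as f:
        # safe_load parses plain data only, avoiding arbitrary code execution
        cfg = yaml.safe_load(f)
    ckpt_dir = Path(ckpt_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(cfg_path, ckpt_dir / cfg_path.name)
    return cfg
```

With the config stored next to the weights, resuming or testing later only requires pointing at the checkpoint folder.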

Code modularization is very important

Some researchers over-couple the whole system when writing code, for example writing the loss function together with the model. A small change then ripples through everything: modify one piece and every downstream interface changes with it. Well-modularized code can therefore save you a lot of time. Typical deep learning code can be divided into a few large blocks (taking PyTorch as an example): an I/O module, a preprocessing module, a visualization module, the model body (if a large model contains sub-models, add a new class for each), loss functions, and post-processing, all chained together in a training or test script. Another benefit of modularization is that it makes it easy to define the different parameters in the YAML file for reading. In addition, many mature codebases use the importlib library: instead of hard-coding in the training script which model or sub-model to use, you can specify it directly in the YAML.
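A minimal sketch of this importlib pattern (the config keys "module", "name", and "kwargs" are hypothetical names of my own, not from any particular library):

```python
import importlib

def build_model(cfg: dict):
    """Instantiate a class named in the config instead of hard-coding it,
    e.g. cfg = {'module': 'models.pointnet', 'name': 'PointNet',
                'kwargs': {'num_classes': 3}}."""
    module = importlib.import_module(cfg["module"])   # import the module by its dotted path
    cls = getattr(module, cfg["name"])                # look up the class by name
    return cls(**cfg.get("kwargs", {}))               # construct it with config kwargs
```

Swapping in a different model then means editing one YAML entry, not the training script.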

How to use TensorBoard and tqdm

I use these two libraries in basically every run. TensorBoard tracks the loss curves of your training very well, making it easy to judge whether the model is still converging or has started overfitting. If you do image-related work, you can also log some visualization results to it. Much of the time, a glance at the convergence status in TensorBoard tells you roughly how your model is doing, and whether it is worth spending time on separate testing and finetuning. tqdm helps you track your training progress intuitively, making it easier to stop early.
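A minimal sketch of combining the two (the model step, data, and tags below are stand-ins of my own, not a real training setup; the writer is anything with an add_scalar method, such as torch.utils.tensorboard.SummaryWriter):

```python
from tqdm import tqdm  # pip install tqdm

def train(model_step, data, writer, epochs=1):
    """Skeleton training loop: model_step(batch) returns a scalar loss;
    writer.add_scalar logs it so TensorBoard draws the loss curve."""
    step = 0
    for epoch in range(epochs):
        # tqdm wraps the iterable and prints a live progress bar
        for batch in tqdm(data, desc=f"epoch {epoch}"):
            loss = model_step(batch)
            writer.add_scalar("train/loss", loss, step)
            step += 1
    return step
```

Watching the train/loss curve flatten (or the validation curve climb) in TensorBoard is often enough to decide whether to stop a run early.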

Make full use of Github

Whether you are doing multi-person collaborative development or a solo project, I strongly recommend using GitHub (your company may use Bitbucket or similar instead) to manage your code. For details, see my answer:

As a graduate student, what scientific research tools do you think are useful?
https://www.zhihu.com/question/484596211/answer/2163122684

Record the experimental results

I usually keep one master Excel sheet to record experimental results: the first column is the path of the YAML corresponding to the model, the second column is the number of training epochs, and the third column is the log of the test results. I usually automate this process: as long as the test script is given the path of the master Excel file, this is easily done with pandas.
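A minimal sketch of this automatic logging (the column names are my own; I use CSV here for simplicity, while the original suggests Excel, which works the same way via pandas read_excel/to_excel with openpyxl installed):

```python
import os

import pandas as pd  # pip install pandas

def record_result(table_path, cfg_path, epochs, test_log):
    """Append one experiment row (config path, epochs, test log)
    to a master results table, creating the table if needed."""
    row = pd.DataFrame([{"config": cfg_path, "epochs": epochs,
                         "test_log": test_log}])
    if os.path.exists(table_path):
        table = pd.concat([pd.read_csv(table_path), row], ignore_index=True)
    else:
        table = row
    table.to_csv(table_path, index=False)
```

Calling this at the end of the test script means every run lands in the same table without manual bookkeeping.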

Answer 2

Author: Jason
Link: https://www.zhihu.com/question/269707221/answer/470576066

Managing code with git has nothing specifically to do with deep learning or research: you should always use version control when writing code. Whether to use GitHub is, I feel, a matter of personal choice; after all, company code usually cannot be pushed to an external Git host.

Let’s talk about a few things you need to pay attention to when writing code:

1. Pass in test parameters through a config file whenever possible, and save the config under the same name as the log file.

On the one hand, passing parameters in externally avoids piling up git revisions that differ only in parameters; since DL is hard to debug, you will sometimes inevitably use git to diff code.

On the other hand, after testing hundreds or thousands of versions, I am sure you will no longer remember which model used which parameters; this habit is very effective. In addition, try to provide default values for newly added parameters, so that old config files can still be used.
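A minimal sketch of giving new parameters defaults so that old config files keep working (the parameter names here are hypothetical):

```python
DEFAULTS = {
    "lr": 1e-3,
    "batch_size": 16,
    "use_aug": False,  # parameter added later; old configs omit it
}

def merge_config(cfg: dict) -> dict:
    """Overlay a loaded config onto the defaults: keys missing from an
    old config file silently fall back to their default values."""
    merged = dict(DEFAULTS)
    merged.update(cfg)
    return merged
```

An old config that only sets "lr" still runs, picking up the default for the newly added "use_aug".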

2. Try to decouple different models

Within one project, good reusability is a very good programming habit, but in fast-moving DL coding, assuming the project is task-driven, it can sometimes become a hindrance. So extract the functions that genuinely are reusable, and decouple the structure-related code of different models into different files; this makes future updates much more convenient. Otherwise, some seemingly beautiful designs will turn out to be useless after a few months.

3. While maintaining a certain degree of stability, regularly follow new versions of the framework

An awkward situation often arises: between the start and end of a project, the framework has been updated several times, and the new versions have some coveted features, but unfortunately some APIs have changed. Therefore, try to keep the framework version stable within a project, weigh the pros and cons of the candidate versions before the project starts, and accept that some relearning is sometimes necessary.

In addition, keep an open mind towards different frameworks.

4. A training run takes a long time, so don't blindly launch experiments as soon as the code is written. In my experience, providing a debug mode that runs on a small subset of the data, with more verbose logging, is a good choice.
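A minimal sketch of such a debug switch (the flag name and subset size are illustrative choices of my own):

```python
import argparse

def get_dataset(samples, debug=False, debug_size=8):
    """In debug mode, train on a tiny slice of the data so a full
    epoch finishes in seconds and bugs surface before a long run."""
    return samples[:debug_size] if debug else samples

parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true",
                    help="run on a small data subset with verbose logging")
```

Running the script once with --debug before every long experiment catches shape mismatches and I/O bugs cheaply.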

5. Record how model performance changes with each update, because you may need to go back and start over at any time.

Answer 3

Author: OpenMMLab
Link: https://www.zhihu.com/question/269707221/answer/2480772257

Hello! The answers above mention using TensorBoard, Weights & Biases, MLflow, Neptune, and other tools to manage experimental data. But as more and more experiment-management wheels get reinvented, the cost of learning the tools keeps rising. How should we choose?

MMCV can satisfy all of these needs: you can switch tools just by modifying the configuration file.

github.com/open-mmlab/mmcv

Recording experimental data with TensorBoard

Configuration file:

log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])

TensorBoard visualization:


Recording experimental data with WandB

Configuration file

log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='WandbLoggerHook')
    ])

WandB visualization:

(You need to log in to wandb via the Python API in advance.)


Recording experimental data with Neptune

Configuration file

log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='NeptuneLoggerHook',
             init_kwargs=dict(project='Your Neptune account/mmcv'))
    ])

Neptune visualization:


Recording experimental data with MLflow

Configuration file

log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='MlflowLoggerHook')
    ])

MLflow visualization:


Recording experimental data with DVCLive

Configuration file

log_config = dict(
    interval=1,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='DvcliveLoggerHook')
    ])

The generated HTML file:


The above uses only the most basic functions of each experiment-management tool; further modifying the configuration file unlocks many more capabilities.

Having MMCV is like having all the experiment-management tools at once. If you come from TensorFlow, you can choose the classic, nostalgic TensorBoard; if you want to record all experimental data and the experimental environment, try WandB (Weights & Biases) or Neptune; if your machine has no internet access, choose MLflow, which saves experimental data locally. There is always a tool that suits you.

In addition, MMCV has its own log management system: TextLoggerHook! It saves all the information generated during training, such as the device environment, the dataset, the model initialization method, and the loss, metrics, and other data produced during training, to a local xxx.log file, so you can review previous experimental data without any external tool.

Still wondering which experiment-management tool to use? Still worried about the learning cost of all these tools? Hop aboard MMCV and try out the various tools painlessly with just a few lines of configuration.

github.com/open-mmlab/mmcv


Statement:
This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.