2024/08/24 17:15:36 - mmengine - DEBUG - An `DeepSpeedStrategy` instance is built from registry, and its implementation can be found in xtuner.engine._strategy.deepspeed
2024/08/24 17:15:37 - mmengine - INFO - ------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
    CUDA available: True
    MUSA available: False
    numpy_random_seed: 615690615
    GPU 0,1: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 12.2, V12.2.140
    GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
    PyTorch: 2.3.1+cu121
    PyTorch compiling details: PyTorch built with:
    - GCC 9.3
    - C++ Version: 201703
    - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
    - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
    - OpenMP 201511 (a.k.a. OpenMP 4.5)
    - LAPACK is enabled (usually provided by MKL)
    - NNPACK is enabled
    - CPU capability usage: AVX512
    - CUDA Runtime 12.1
    - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
    - CuDNN 8.9.2
    - Magma 2.6.1
    - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
    TorchVision: 0.18.1+cu121
    OpenCV: 4.9.0
    MMEngine: 0.10.3

Runtime environment:
    launcher: none
    randomness: {'seed': None, 'deterministic': False}
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    deterministic: False
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------
2024/08/24 17:15:37 - mmengine - INFO - Config:
accumulative_counts = 4
batch_size = 4
betas = (
    0.9,
    0.999,
)
custom_hooks = [
    dict(
        tokenizer=dict(
            pretrained_model_name_or_path='/root/models/InternVL2_2B',
            trust_remote_code=True,
            type='transformers.AutoTokenizer.from_pretrained'),
        type='xtuner.engine.hooks.DatasetInfoHook'),
]
data_path = '/root/data/screenshot_od/layout_ocr_multi.json'
data_root = '/root/data/extracted_images'
dataloader_num_workers = 4
default_hooks = dict(
    checkpoint=dict(
        by_epoch=False,
        interval=1000,
        max_keep_ckpts=-1,
        save_optimizer=False,
        type='mmengine.hooks.CheckpointHook'),
    logger=dict(
        interval=10,
        log_metric_by_epoch=False,
        type='mmengine.hooks.LoggerHook'),
    param_scheduler=dict(type='mmengine.hooks.ParamSchedulerHook'),
    sampler_seed=dict(type='mmengine.hooks.DistSamplerSeedHook'),
    timer=dict(type='mmengine.hooks.IterTimerHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    dist_cfg=dict(backend='nccl'),
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
image_folder = '/root/data/extracted_imagesscreenshot_od/images'
launcher = 'none'
llava_dataset = dict(
    data_paths='/root/data/screenshot_od/layout_ocr_multi.json',
    image_folders='/root/data/extracted_imagesscreenshot_od/images',
    max_length=8192,
    model_path='/root/models/InternVL2_2B',
    template='xtuner.utils.PROMPT_TEMPLATE.internlm2_chat',
    type='xtuner.dataset.InternVL_V1_5_Dataset')
load_from = None
log_level = 'DEBUG'
log_processor = dict(by_epoch=False)
lr = 2e-05
max_epochs = 4
max_length = 8192
max_norm = 1
model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm_lora=dict(
        lora_alpha=256,
        lora_dropout=0.05,
        r=128,
        target_modules=None,
        task_type='CAUSAL_LM',
        type='peft.LoraConfig'),
    model_path='/root/models/InternVL2_2B',
    quantization_llm=True,
    quantization_vit=False,
    type='xtuner.model.InternVL_V1_5')
optim_type = 'torch.optim.AdamW'
optim_wrapper = dict(
    optimizer=dict(
        betas=(
            0.9,
            0.999,
        ),
        lr=2e-05,
        type='torch.optim.AdamW',
        weight_decay=0.1),
    type='DeepSpeedOptimWrapper')
param_scheduler = [
    dict(
        begin=0,
        by_epoch=True,
        convert_to_iter_based=True,
        end=0.12,
        start_factor=1e-05,
        type='mmengine.optim.LinearLR'),
    dict(
        begin=0.12,
        by_epoch=True,
        convert_to_iter_based=True,
        end=4,
        eta_min=0.0,
        type='mmengine.optim.CosineAnnealingLR'),
]
path = '/root/models/InternVL2_2B'
prompt_template = 'xtuner.utils.PROMPT_TEMPLATE.internlm2_chat'
randomness = dict(deterministic=False, seed=None)
resume = False
runner_type = 'FlexibleRunner'
save_steps = 1000
save_total_limit = -1
strategy = dict(
    config=dict(
        bf16=dict(enabled=True),
        fp16=dict(enabled=False, initial_scale_power=16),
        gradient_accumulation_steps='auto',
        gradient_clipping='auto',
        train_micro_batch_size_per_gpu='auto',
        zero_allow_untested_optimizer=True,
        zero_force_ds_cpu_optimizer=False,
        zero_optimization=dict(overlap_comm=True, stage=2)),
    exclude_frozen_parameters=True,
    gradient_accumulation_steps=4,
    gradient_clipping=1,
    sequence_parallel_size=1,
    train_micro_batch_size_per_gpu=4,
    type='xtuner.engine.DeepSpeedStrategy')
tokenizer = dict(
    pretrained_model_name_or_path='/root/models/InternVL2_2B',
    trust_remote_code=True,
    type='transformers.AutoTokenizer.from_pretrained')
train_cfg = dict(max_epochs=4, type='xtuner.engine.runner.TrainLoop')
train_dataloader = dict(
    batch_size=4,
    collate_fn=dict(type='xtuner.dataset.collate_fns.default_collate_fn'),
    dataset=dict(
        data_paths='/root/data/screenshot_od/layout_ocr_multi.json',
        image_folders='/root/data/extracted_imagesscreenshot_od/images',
        max_length=8192,
        model_path='/root/models/InternVL2_2B',
        template='xtuner.utils.PROMPT_TEMPLATE.internlm2_chat',
        type='xtuner.dataset.InternVL_V1_5_Dataset'),
    num_workers=4,
    sampler=dict(
        length_property='modality_length',
        per_device_batch_size=16,
        type='xtuner.dataset.samplers.LengthGroupedSampler'))
visualizer = dict(
    type='mmengine.visualization.Visualizer',
    vis_backends=[
        dict(type='mmengine.visualization.TensorboardVisBackend'),
    ])
warmup_ratio = 0.03
weight_decay = 0.1
work_dir = '/root/wangqun/work_dirs/internvl_ft_run_11_filter'
2024/08/24 17:15:37 - mmengine - DEBUG - An `TensorboardVisBackend` instance is built from registry, and its implementation can be found in mmengine.visualization.vis_backend
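For reference, the config above implies an effective optimizer batch size of train_micro_batch_size_per_gpu x gradient_accumulation_steps x world_size, and the LinearLR end=0.12 is warmup_ratio x max_epochs. A minimal sketch of that arithmetic, using only values that appear in this log (variable names are illustrative):

    import math

    # Values copied from the logged config above.
    micro_batch_per_gpu = 4   # train_micro_batch_size_per_gpu
    grad_accum = 4            # gradient_accumulation_steps / accumulative_counts
    world_size = 1            # "GPU number: 1" in the environment block
    num_samples = 4806        # reported later in this log
    max_epochs = 4
    warmup_ratio = 0.03

    # Samples consumed per optimizer step.
    effective_batch = micro_batch_per_gpu * grad_accum * world_size  # 16

    # Approximate iteration counts; the LengthGroupedSampler may reorder or
    # pad batches, so treat these as estimates rather than exact values.
    iters_per_epoch = math.ceil(num_samples / (micro_batch_per_gpu * world_size))  # ~1202
    total_iters = iters_per_epoch * max_epochs                                     # ~4808

    # warmup_ratio * max_epochs = 0.12 epochs, matching LinearLR's end=0.12;
    # with convert_to_iter_based=True this becomes ~0.12 * iters_per_epoch iters.
    warmup_iters = round(warmup_ratio * max_epochs * iters_per_epoch)              # ~144

    print(effective_batch, iters_per_epoch, total_iters, warmup_iters)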
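In the strategy block of the config, the 'auto' placeholders in the inner DeepSpeed config are filled from the outer strategy kwargs before the engine is initialized. A hedged sketch of the effective DeepSpeed config this run should end up with; the resolution behavior is inferred from the duplicated fields in the config, not verified against xtuner's code:

    # Effective DeepSpeed config after xtuner's DeepSpeedStrategy resolves
    # the 'auto' fields from its own kwargs (inferred, not verified):
    ds_config = {
        'bf16': {'enabled': True},
        'fp16': {'enabled': False, 'initial_scale_power': 16},
        'gradient_accumulation_steps': 4,     # from gradient_accumulation_steps=4
        'gradient_clipping': 1,               # from gradient_clipping=1
        'train_micro_batch_size_per_gpu': 4,  # from train_micro_batch_size_per_gpu=4
        'zero_allow_untested_optimizer': True,
        'zero_force_ds_cpu_optimizer': False,
        'zero_optimization': {'overlap_comm': True, 'stage': 2},
    }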
2024/08/24 17:15:37 - mmengine - DEBUG - An `Visualizer` instance is built from registry, and its implementation can be found in mmengine.visualization.visualizer
2024/08/24 17:15:37 - mmengine - DEBUG - Attribute `_env_initialized` is not defined in or `._env_initialized is False, `_init_env` will be called and ._env_initialized will be set to True
2024/08/24 17:15:39 - mmengine - DEBUG - Get class `RuntimeInfoHook` from "hook" registry in "mmengine"
2024/08/24 17:15:39 - mmengine - DEBUG - An `RuntimeInfoHook` instance is built from registry, and its implementation can be found in mmengine.hooks.runtime_info_hook
2024/08/24 17:15:39 - mmengine - DEBUG - An `IterTimerHook` instance is built from registry, and its implementation can be found in mmengine.hooks.iter_timer_hook
2024/08/24 17:15:39 - mmengine - DEBUG - An `DistSamplerSeedHook` instance is built from registry, and its implementation can be found in mmengine.hooks.sampler_seed_hook
2024/08/24 17:15:39 - mmengine - DEBUG - An `LoggerHook` instance is built from registry, and its implementation can be found in mmengine.hooks.logger_hook
2024/08/24 17:15:39 - mmengine - DEBUG - An `ParamSchedulerHook` instance is built from registry, and its implementation can be found in mmengine.hooks.param_scheduler_hook
2024/08/24 17:15:39 - mmengine - DEBUG - An `CheckpointHook` instance is built from registry, and its implementation can be found in mmengine.hooks.checkpoint_hook
2024/08/24 17:15:39 - mmengine - WARNING - Failed to search registry with scope "mmengine" in the "builder" registry tree. As a workaround, the current "builder" registry in "xtuner" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmengine" is a correct scope, or whether the registry is initialized.
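The WARNING above is mmengine's registry falling back from the "mmengine" scope to xtuner's own "builder" registry; with fully dotted `type` paths like the ones in this config, the object is still resolved by import, so the warning is typically harmless. A minimal sketch of building from a registry with a dotted `type` string, assuming mmengine's dotted-path support that this very config relies on (the CheckpointHook arguments are copied from the logged config):

    from mmengine.registry import HOOKS

    # When `type` is a full dotted path, mmengine imports the class directly,
    # so the scope fallback noted in the WARNING does not change the result.
    hook = HOOKS.build(
        dict(
            type='mmengine.hooks.CheckpointHook',
            by_epoch=False,
            interval=1000,
            max_keep_ckpts=-1,
            save_optimizer=False))
    print(type(hook))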
2024/08/24 17:15:39 - mmengine - DEBUG - An `from_pretrained` instance is built from registry, and its implementation can be found in transformers.models.auto.tokenization_auto
2024/08/24 17:15:39 - mmengine - DEBUG - An `DatasetInfoHook` instance is built from registry, and its implementation can be found in xtuner.engine.hooks.dataset_info_hook
2024/08/24 17:15:39 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DatasetInfoHook
(VERY_LOW    ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
before_val:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
--------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
--------------------
before_val_iter:
(NORMAL      ) IterTimerHook
--------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
--------------------
after_val:
(VERY_HIGH   ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH   ) RuntimeInfoHook
(VERY_LOW    ) CheckpointHook
--------------------
before_test:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) DatasetInfoHook
--------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
--------------------
before_test_iter:
(NORMAL      ) IterTimerHook
--------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH   ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
2024/08/24 17:15:39 - mmengine - DEBUG - An `FlexibleRunner` instance is built from registry, and its implementation can be found in mmengine.runner._flexible_runner
2024/08/24 17:15:39 - mmengine - INFO - Starting to loading data and calc length
2024/08/24 17:15:39 - mmengine - INFO - =======Starting to process /root/data/screenshot_od/layout_ocr_multi.json =======
2024/08/24 17:15:46 - mmengine - INFO - =======total 4806 samples of /root/data/screenshot_od/layout_ocr_multi.json=======
2024/08/24 17:15:46 - mmengine - INFO - end loading data and calc length
2024/08/24 17:15:46 - mmengine - INFO - =======total 4806 samples=======
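The hook table above is ordered by mmengine's hook priority: for each event, higher-priority hooks (VERY_HIGH) run first. A minimal sketch of a custom hook with an explicit priority, assuming the standard mmengine Hook API; the hook name and body are illustrative, not part of this run:

    from mmengine.hooks import Hook
    from mmengine.registry import HOOKS


    @HOOKS.register_module()
    class EchoIterHook(Hook):
        """Illustrative hook: logs every N train iterations."""

        priority = 'LOW'  # runs after NORMAL hooks such as IterTimerHook

        def __init__(self, interval=10):
            self.interval = interval

        def after_train_iter(self, runner, batch_idx, data_batch=None, outputs=None):
            if (batch_idx + 1) % self.interval == 0:
                runner.logger.info(f'iter {batch_idx + 1} done')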
2024/08/24 17:15:46 - mmengine - DEBUG - An `InternVL_V1_5_Dataset` instance is built from registry, and its implementation can be found in xtuner.dataset.internvl_dataset
2024/08/24 17:15:46 - mmengine - INFO - LengthGroupedSampler is used.
2024/08/24 17:15:46 - mmengine - INFO - LengthGroupedSampler construction is complete, and the selected attribute is modality_length
2024/08/24 17:15:46 - mmengine - DEBUG - An `LengthGroupedSampler` instance is built from registry, and its implementation can be found in xtuner.dataset.samplers.length_grouped
2024/08/24 17:15:46 - mmengine - WARNING - Dataset InternVL_V1_5_Dataset has no metainfo. ``dataset_meta`` in visualizer will be None.
2024/08/24 17:15:46 - mmengine - DEBUG - An `TrainLoop` instance is built from registry, and its implementation can be found in xtuner.engine.runner.loops
2024/08/24 17:15:46 - mmengine - INFO - Start to load InternVL_V1_5 model.
2024/08/24 17:15:46 - mmengine - DEBUG - Get class `BaseDataPreprocessor` from "model" registry in "mmengine"
2024/08/24 17:15:46 - mmengine - DEBUG - An `BaseDataPreprocessor` instance is built from registry, and its implementation can be found in mmengine.model.base_model.data_preprocessor
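The LengthGroupedSampler noted above groups samples of similar `modality_length` into the same batch, which cuts padding waste on variable-length multimodal data. A minimal sketch of the underlying idea, not xtuner's actual implementation (the function name and seed handling are illustrative):

    import random

    def length_grouped_indices(lengths, mega_batch_size, seed=0):
        """Shuffle, then sort by length inside each mega-batch so that
        samples batched together have similar lengths (less padding)."""
        rng = random.Random(seed)
        indices = list(range(len(lengths)))
        rng.shuffle(indices)
        grouped = []
        for start in range(0, len(indices), mega_batch_size):
            chunk = indices[start:start + mega_batch_size]
            grouped.extend(sorted(chunk, key=lambda i: lengths[i], reverse=True))
        return grouped

    # e.g. per_device_batch_size=16 from the config above as the mega-batch size
    print(length_grouped_indices([5, 120, 7, 90, 300, 12, 45, 88], 4))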
2024/08/24 17:15:56 - mmengine - DEBUG - An `LoraConfig` instance is built from registry, and its implementation can be found in peft.tuners.lora.config
2024/08/24 17:15:57 - mmengine - INFO - InternVL_V1_5(
  (data_preprocessor): BaseDataPreprocessor()
  (model): InternVLChatModel(
    (vision_model): InternVisionModel(
      (embeddings): InternVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14))
      )
      (encoder): InternVisionEncoder(
        (layers): ModuleList(
          (0-23): 24 x InternVisionEncoderLayer(
            (attn): InternAttention(
              (qkv): Linear(in_features=1024, out_features=3072, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (mlp): InternMLP(
              (act): GELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (norm1): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
            (norm2): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
            (drop_path1): Identity()
            (drop_path2): Identity()
          )
        )
      )
    )
    (language_model): PeftModelForCausalLM(
      (base_model): LoraModel(
        (model): InternLM2ForCausalLM(
          (model): InternLM2Model(
            (tok_embeddings): Embedding(92553, 2048, padding_idx=2)
            (layers): ModuleList(
              (0-23): 24 x InternLM2DecoderLayer(
                (attention): InternLM2Attention(
                  (wqkv): lora.Linear(
                    (base_layer): Linear4bit(in_features=2048, out_features=4096, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=4096, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (wo): lora.Linear(
                    (base_layer): Linear4bit(in_features=2048, out_features=2048, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=2048, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (rotary_emb): InternLM2DynamicNTKScalingRotaryEmbedding()
                )
                (feed_forward): InternLM2MLP(
                  (w1): lora.Linear(
                    (base_layer): Linear4bit(in_features=2048, out_features=8192, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=8192, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (w3): lora.Linear(
                    (base_layer): Linear4bit(in_features=2048, out_features=8192, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=2048, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=8192, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (w2): lora.Linear(
                    (base_layer): Linear4bit(in_features=8192, out_features=2048, bias=False)
                    (lora_dropout): ModuleDict(
                      (default): Dropout(p=0.05, inplace=False)
                    )
                    (lora_A): ModuleDict(
                      (default): Linear(in_features=8192, out_features=128, bias=False)
                    )
                    (lora_B): ModuleDict(
                      (default): Linear(in_features=128, out_features=2048, bias=False)
                    )
                    (lora_embedding_A): ParameterDict()
                    (lora_embedding_B): ParameterDict()
                  )
                  (act_fn): SiLU()
                )
                (attention_norm): InternLM2RMSNorm()
                (ffn_norm): InternLM2RMSNorm()
              )
            )
            (norm): InternLM2RMSNorm()
          )
          (output): lora.Linear(
            (base_layer): Linear4bit(in_features=2048, out_features=92553, bias=False)
            (lora_dropout): ModuleDict(
              (default): Dropout(p=0.05, inplace=False)
            )
            (lora_A): ModuleDict(
              (default): Linear(in_features=2048, out_features=128, bias=False)
            )
            (lora_B): ModuleDict(
              (default): Linear(in_features=128, out_features=92553, bias=False)
            )
            (lora_embedding_A): ParameterDict()
            (lora_embedding_B): ParameterDict()
          )
        )
      )
    )
    (mlp1): Sequential(
      (0): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
      (1): Linear(in_features=4096, out_features=2048, bias=True)
      (2): GELU(approximate='none')
      (3): Linear(in_features=2048, out_features=2048, bias=True)
    )
  )
)
2024/08/24 17:15:57 - mmengine - INFO - InternVL_V1_5 construction is complete
2024/08/24 17:15:57 - mmengine - DEBUG - An `InternVL_V1_5` instance is built from registry, and its implementation can be found in xtuner.model.internvl
2024/08/24 17:15:57 - mmengine - DEBUG - Get class `DefaultOptimWrapperConstructor` from "optimizer wrapper constructor" registry in "mmengine"
2024/08/24 17:15:57 - mmengine - DEBUG - An `DefaultOptimWrapperConstructor` instance is built from registry, and its implementation can be found in mmengine.optim.optimizer.default_constructor
2024/08/24 17:15:57 - mmengine - DEBUG - An `AdamW` instance is built from registry, and its implementation can be found in torch.optim.adamw
2024/08/24 17:15:57 - mmengine - DEBUG - Get class `DeepSpeedOptimWrapper` from "optim_wrapper" registry in "mmengine"
2024/08/24 17:15:57 - mmengine - DEBUG - An `DeepSpeedOptimWrapper` instance is built from registry, and its implementation can be found in mmengine._strategy.deepspeed
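The structure above shows QLoRA in effect: every `lora.Linear` wraps a frozen 4-bit `Linear4bit` base layer, with rank-128 adapters on wqkv/wo/w1/w3/w2 and the output head, while the vision tower stays plain (frozen, unquantized). A minimal sketch of an equivalent adapter config with peft, using the r/alpha/dropout values from the logged config; note the config sets target_modules=None and lets xtuner derive the module list, so naming the modules explicitly here is an assumption for illustration:

    from peft import LoraConfig

    # Values mirror the logged llm_lora dict. The explicit target_modules
    # list is an assumption matching the printed module structure above.
    lora_cfg = LoraConfig(
        r=128,
        lora_alpha=256,
        lora_dropout=0.05,
        task_type='CAUSAL_LM',
        target_modules=['wqkv', 'wo', 'w1', 'w3', 'w2', 'output'],
    )
    print(lora_cfg)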
2024/08/24 17:15:59 - mmengine - DEBUG - The `end` of is not set. Use the max epochs/iters of train loop as default.
2024/08/24 17:15:59 - mmengine - DEBUG - The `end` of is not set. Use the max epochs/iters of train loop as default.
2024/08/24 17:16:00 - mmengine - INFO - Num train samples 4806
2024/08/24 17:16:00 - mmengine - INFO - train example:
2024/08/24 17:16:00 - mmengine - INFO - <|im_start|>system
You are an AI assistant whose name is InternLM (书生·浦语).<|im_end|>
<|im_start|>user
请从这张聊天截图中提取结构化信息<|im_end|>
<|im_start|>assistant
{
  "dialog_name": "<对方正在输入...",
  "conversation": [
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "不是", "message_bbox": {"min_x": 917, "max_x": 989, "min_y": 253, "max_y": 289}, "image": "", "transfer": [], "file": []},
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "在淘宝里", "message_bbox": {"min_x": 839, "max_x": 987, "min_y": 370, "max_y": 404}, "image": "", "transfer": [], "file": []},
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "不能发微信", "message_bbox": {"min_x": 801, "max_x": 989, "min_y": 485, "max_y": 521}, "image": "", "transfer": [], "file": []},
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "两字", "message_bbox": {"min_x": 915, "max_x": 988, "min_y": 601, "max_y": 637}, "image": "", "transfer": [], "file": []},
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "微信", "message_bbox": {"min_x": 916, "max_x": 990, "min_y": 718, "max_y": 753}, "image": "", "transfer": [], "file": []},
    {"timestamp": "", "speaker": "<对方正在输入...", "content": "①微信", "message_bbox": {"min_x": 845, "max_x": 988, "min_y": 833, "max_y": 869}, "image": "", "transfer": [], "file": []}
  ]
}<|im_end|>
2024/08/24 17:16:00 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2024/08/24 17:16:00 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2024/08/24 17:16:00 - mmengine - INFO - Checkpoints will be saved to /root/wangqun/work_dirs/internvl_ft_run_11_filter.
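The train example above follows the internlm2_chat prompt template: each turn is wrapped in <|im_start|>{role} ... <|im_end|>, and the structured-extraction target is the assistant turn. A minimal sketch of assembling that format; this helper is illustrative only, since xtuner builds the prompt via PROMPT_TEMPLATE.internlm2_chat, and the exact turn separator is an assumption:

    def internlm2_chat_format(messages):
        """Render (role, content) pairs in the internlm2_chat style seen above.

        Illustrative only; xtuner's PROMPT_TEMPLATE.internlm2_chat is the
        authoritative definition, and joining turns with '\n' is an assumption.
        """
        return '\n'.join(
            f'<|im_start|>{role}\n{content}<|im_end|>'
            for role, content in messages)

    print(internlm2_chat_format([
        ('system', 'You are an AI assistant whose name is InternLM (书生·浦语).'),
        ('user', '请从这张聊天截图中提取结构化信息'),
    ]))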