Thank you for participating in ICSHM-2022. We have received a number of questions so far; here are answers to some of the most common ones, which may help you complete the competition tasks.
To do the camera calibration, the chessboard dimensions are normally given, but you did not provide the size of the chessboard squares.
The size of each small square is 25 mm × 25 mm.
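For reference, calibration with this square size could follow the standard OpenCV chessboard workflow. The sketch below is only a minimal example: the number of inner corners (9×6) and the image path are assumptions and must be replaced with the actual pattern and folder layout.

    # Minimal camera-calibration sketch.  PATTERN and the image path are assumptions.
    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)      # inner corners per row/column -- an assumption, check your chessboard
    SQUARE_MM = 25.0      # square size stated above: 25 mm x 25 mm

    # 3D chessboard corner coordinates in millimetres (Z = 0 plane).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_points, img_points = [], []
    for path in glob.glob("chessboard_imgs/*.jpg"):          # hypothetical path
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # Camera intrinsics and distortion coefficients for one camera.
    err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", err)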
We found about 1000 images and seven videos in the project folder. Are they all for training, or do we need to set aside some videos for testing?
The 1102 images in folder “imgs_trainset” are for training. The seven videos in folder “video” are just for testing.
The four camera locations in Figure 1 do not seem to match the video names. For example, video 1-1.mp4 appears to correspond to camera 2 in Figure 1, while video 1-2.mp4 corresponds to camera 1.
We are sorry that the locations of camera 1 and camera 2 in Figure 1 were reversed by mistake. We have checked the data again and confirm that this mistake occurs only in Figure 1. The correspondence among the folders “chessboard_imgs”, “point_pairs”, and “video” is correct.
The resolution of the seven videos is 1080×1920, but the points with physical positions in the Excel files have larger pixel values. For instance, in the file cam_2.xlsx, the largest pixel coordinate is (2428, 548), which exceeds the video resolution.
In the Excel files in folder “point_pairs”, pixel_x and pixel_y are the pixel coordinates in the image (size: 2560×1440), as described in the “Read_Me” file. The resolution of the seven videos is 1920×1080, so you should account for the difference in image size and transform the data to the same size.
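As an illustration, the point-pair coordinates can be rescaled from the 2560×1440 images to the 1920×1080 video frames with a simple per-axis ratio (0.75 on both axes). The sketch below is a minimal example; the file path is only illustrative.

    # Rescale point-pair pixel coordinates from the 2560x1440 images to the 1920x1080 videos.
    import pandas as pd

    SRC_W, SRC_H = 2560, 1440      # image size used in the point_pairs Excel files
    DST_W, DST_H = 1920, 1080      # resolution of the seven videos

    df = pd.read_excel("point_pairs/cam_2.xlsx")            # file name taken from the question
    df["pixel_x_video"] = df["pixel_x"] * DST_W / SRC_W     # factor 0.75
    df["pixel_y_video"] = df["pixel_y"] * DST_H / SRC_H     # factor 0.75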
What is the direction of the data_acceleration?
It is a single-direction excitation test.
I see that in your announcement for the competition, you said that nine groups of original videos are provided for training. However, in the data read-me file, you only provide a folder named imgs_trainset containing images of ships. Does this folder contain all the data we can use for training our model, and are the nine groups of videos the test videos?
The images in folder “imgs_trainset” are for training, while the nine groups of videos are the test videos.
Do the videos include a calibration checkerboard for checking the displacement?
For this problem, there is no checkerboard with a pre-calibrated size; the local pixel resolution can be calculated from the geometric information of the structural components.
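In other words, a local scale factor (physical length per pixel) can be estimated from a component whose real dimension is known. In the minimal sketch below, the 0.5 m component depth and the two pixel coordinates are purely illustrative assumptions.

    # Estimate the local pixel resolution from a structural component of known size.
    # The 0.5 m dimension and the pixel coordinates are illustrative assumptions only.
    import math

    known_length_m = 0.5                      # real dimension of the component (assumption)
    p1, p2 = (812.0, 403.0), (815.0, 655.0)   # its two end points picked in a video frame (assumption)

    pixel_length = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    scale = known_length_m / pixel_length     # metres per pixel near that component
    displacement_m = 3.2 * scale              # e.g. convert a 3.2-pixel motion to metres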
In the data-driven modeling task, training data for 5 acceleration channels are provided, and noise is added in the data_noised.mat file. The magnitude of this noise is not stated; is it constant over the course of data collection by the sensor? Is the noise in data_noised_testset.mat the same as the noise in data_noised.mat?
The noise in data_noised_testset.mat is the same as the noise in data_noised.mat, and it’s constant over the course of data collection by the sensor.
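If you want to verify this yourself and a clean counterpart of the noised training record is available, one simple check is to subtract the two signals and inspect the noise level in successive windows. In the sketch below, the clean file name, the variable name inside the .mat files, and the window length are all assumptions.

    # Check that the added noise level stays roughly constant over the record.
    # "data_clean.mat" and the variable name "data" are hypothetical; adjust to the real files.
    import numpy as np
    from scipy.io import loadmat

    clean = loadmat("data_clean.mat")["data"]       # hypothetical clean training signals
    noised = loadmat("data_noised.mat")["data"]     # noised training data (file named in the question)

    noise = noised - clean                          # isolate the added noise, shape (n_samples, 5)
    win = 2000                                      # samples per window -- an assumption
    n_win = noise.shape[0] // win
    windowed_std = noise[: n_win * win].reshape(n_win, win, -1).std(axis=1)
    print(windowed_std)                             # nearly identical rows => stationary noise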
In the damage identification task, we need to use the given data for structural damage identification. However, are the training data of the five sensors provided in task (b) displacement or acceleration data?
All training data provided for the five sensors are acceleration data.
It was stated in the document Announcement-The 3rd International Competition for SHM.pdf (page 12) that “the displacement, strain, and acceleration of all nodes are included in the training dataset” and that we can use Physics-Informed Neural Networks (PINNs). However, upon reviewing the actual training dataset released on December 20, we noticed that we are only provided with the data corresponding to the measured output of accelerometers 1 to 5. This dataset may not be fully sufficient if we are to implement PINNs in our project.
We only provide the acceleration data. Please complete the task using the existing data.
In the announcement document, it was mentioned that the evaluation would be based on this formula: [score = (Training data × RME) / Max Training], where a lower score is better. We wonder whether, by using less training data, we could get a lower score even though the prediction quality would be reduced. What other kind of evaluation will be considered?
It is recommended to use all data for training.
In the aforementioned document, it is also written that there is an FEM model, but neither the model nor its properties are given. Could you please guide us in this matter?
No finite element model is required for this task.
In the given document, it is mentioned that the load types of the training data are moving vehicle loads, Gaussian noise, and impulse loads, but the newest submission guide does not mention the load type of the training data. Could you please clarify this?
The load type of the training data is vehicle load.
In Figure 6, on page 9 of the provided read-me, the sensors are labeled as displacement transducers. Is this correct, or, as in part 1, are the data accelerometer data (accelerometers are shown in the same locations in Figure 5)? Also, what are the units of the data?
All data are accelerometer data, with units of m/s².
Because project 3 is made up of two relatively independent tasks, the length limit (15 pages) of the given paper template may be insufficient for us to explain our work clearly. Hence, may we include more content and exceed the length limit?
If necessary, you can extend your paper as needed to make your writing more detailed and well-organized. However, we still suggest that it not exceed 25 pages, like a journal article.