A Matlab-based Simulator for Autonomous Mobile Robots

Abstract

Matlab is a powerful software development tool that can dramatically reduce the programming workload during algorithm development and theoretical research. Unfortunately, most commercial robot simulators do not support Matlab. This paper presents a Matlab-based simulator for developing 2D indoor robot navigation algorithms. It provides a simple user interface for constructing robot models and indoor environment models, including the visual observations supplied to the algorithms under test. Experimental results are presented to show the feasibility and performance of the proposed simulator.

Keywords: Mobile robot, Navigation, Simulator, Matlab

1. Introduction

Navigation is an essential ability of a mobile robot. During the development of new navigation algorithms, it is necessary to test them on simulated robots and environments before testing on real robots in the real world. This is because (i) robots are expensive; (ii) an untested algorithm may damage the robot during the experiment; (iii) it is difficult to construct and alter system models under background noise; (iv) the transient state is difficult to track precisely; and (v) the measurements of the external beacons are hidden during a real experiment, yet this information is often helpful for debugging and updating the algorithms.

A software simulator is a good solution to these problems. A good simulator provides many different environments to help researchers find the problems in their algorithms across different kinds of mobile robots. To address the issues listed above, such a simulator must be able to monitor system states closely, and it should offer a flexible and friendly user interface for developing all kinds of algorithms.

Up to now, many commercial simulators with good performance have been developed. For instance, MOBOTSIM is a 2D simulator for Windows that provides a graphical interface to build environments [1], but it supports only limited robot models (differentially driven robots with distance sensors only) and cannot deal with vision-based algorithms. Bugworks is a very simple simulator providing a drag-and-place interface [2], but it offers only primitive functions and is more a demonstration than a simulator. Some other robot simulators, such as Ropsim [3], ThreeDimSim [5], and RPG Kinematix [6], are not specially designed for developing autonomous navigation algorithms for mobile robots and have very limited functions. Among the commercial simulators, Webots from Cyberbotics [4] and MRS from Microsoft are the more powerful and better-performing simulators for mobile robot navigation. Both provide powerful interfaces for building mobile robots and environments, excellent 3D display, accurate performance simulation, and programming languages for robot control. Perhaps because of these powerful functions, they are difficult for a new user; for instance, building an environment for visual utilities is a tedious job involving shape building, material selection and illumination design. Moreover, some robot development kits have built-in simulators for particular kinds of robots. Aria from ActivMedia has a 2D indoor simulator for Pioneer mobile robots [8]; it adopts flexible text files to configure the environment, but supports only limited robot models. The majority of commercial simulators, however, do not currently support Matlab.

On the other hand, Matlab provides good support for matrix computing, image processing, fuzzy logic, neural networks, etc., and can dramatically reduce the coding time in the research stage of new navigation algorithms. For example, a matrix inversion may need a function of hundreds of lines in a general-purpose language, but it is a single command in Matlab. Using Matlab at this stage avoids wasting time re-implementing existing algorithms and lets researchers focus on developing new theory and algorithms.
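To make the point concrete, the following Matlab fragment (the matrix values are arbitrary examples) shows both a linear-system solve and an explicit inverse, each a substantial routine when hand-coded, reducing to a single statement:

% Solving A*x = b: one statement replaces a hand-coded
% elimination routine of hundreds of lines.
A = [4 -2 1; -2 4 -2; 1 -2 4];   % arbitrary example matrix
b = [11; -16; 17];
x = A \ b;                        % mldivide: solves the linear system
Ainv = inv(A);                    % explicit inverse, if really needed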

This paper presents a Matlab-based simulator that is fully compatible with Matlab code and makes it possible for robotics researchers to debug their code and run experiments conveniently at the first stage of their research. Algorithm development is based on Matlab subroutines with appointed parameter variables, which are stored in a file to be accessed by the simulator. Using this simulator, we can build the environment, select parameters, build subroutines and display outputs on the screen. Data are recorded during the whole procedure, and some basic analyses are also performed.

The rest of the paper is organized as follows. The software structure of the proposed simulator is explained in Section II. Section III describes the user interface of the proposed simulator. Some experimental results are given in Section IV to show the system performance. Finally, Section V presents a brief conclusion and potential future work.

2. Software architecture

To make algorithm design and debugging easier, our Matlab-based simulator has been designed to provide the following functions:
- Easy environment model building, including walls, obstacles, beacons and visual scenes.
- Robot model building, including the driving and control system and the noise levels.
- Observation model setting: the simulator calculates the image frame that the robot can see, according to the precise robot pose, the camera parameters and the environment.
- Bumping reaction simulation: if the robot touches a "wall", the simulator can stop the robot even when other modules command it to move forward. This function prevents the robot from passing through a "wall" like a ghost, and makes the simulation behave like an experiment on a real robot.
- Real-time display of the running process and the observations, so that users can track the navigation procedure and find bugs.
- Statistical results of the whole run, including the transient and average localization errors; these detailed navigation results support offline analysis. Some basic analysis is performed within these modules.

The architecture shown in Fig. 1 has been developed to implement the functions above. The rest of this section explains the modules of the simulator in detail.

2.1. User Interface

The simulator provides an interface to build the environment and set the noise models, and a few separate subroutines are available for users to implement their observation and localization algorithms. The parameters and settings defined by users are obtained by the interface modules and files. As shown in Fig. 1, the modules above the dashed line form the user interface. Using the Customer Configure Files, users can describe environments (walls, corridors, doorways, obstacles and beacons), specify the system and control models, define the noise at different steps, and adjust the simulator settings. The Customer Subroutines should be a series of source files with the required input/output parameters. The simulator calls these subroutines and uses their results to control the mobile robot. The algorithms in the Customer Subroutines are therefore tested in the system defined by the Customer Configure Files (CCFs). The grey blocks in Fig. 1 are the Customer Subroutines integrated in the simulator.

Fig. 1 Software structure of the simulator
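The exact signatures of the Customer Subroutines are not stated in the text, so the skeleton below is only a hypothetical illustration of the convention: a Matlab function with appointed output variables, reusing the names (users_Localization, lpose, l_noise) that appear in the refresh routine of Fig. 4. The argument names are our assumptions.

function [lpose, l_noise] = users_Localization(odometry, obv)
    % Hypothetical customer localization subroutine: odometry is the
    % noisy internal data, obv the beacon observations, both supplied
    % by the simulator.
    persistent state                  % filter state kept between calls
    if isempty(state)
        state = struct('pose', [0; 0; 0], 'cov', eye(3));
    end
    % ... the user's localization algorithm updates state here ...
    lpose   = state.pose;             % estimated pose [x; y; theta]
    l_noise = state.cov;              % uncertainty of the estimate
end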

The environment is described by a configure file in which the corners of the walls are given as Cartesian value pairs. Each pair defines a point in the environment, and the Program Configure module connects the points with straight lines in series and regards these lines as the walls. Each beacon is defined by a four-element vector [x, y, θ, P]^T, where (x, y) gives the beacon's position in Cartesian coordinates, θ is the direction the beacon faces, and P is a pointer to an image file that captures the view in front of the beacon. For a non-visual beacon, e.g. a reflective pole for a laser scanner, the element P is given a value that is illegal for an image pointer.
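As a concrete illustration, an environment description in this style could look like the following Matlab sketch. The variable names follow those in Fig. 4 (wall_Para, beacon_Para); the numbers, and the use of -1 as the illegal image pointer, are assumptions.

% Corner points (x, y); consecutive points are joined into walls.
wall_Para = [0  0;
             10 0;
             10 8;
             0  8;
             0  0];                 % closes the room outline

% Beacons as columns [x; y; theta; P]: P indexes a scene image for
% visual beacons; an illegal value (here -1) marks a non-visual one.
beacon_Para = [2.0   8.0;           % x
               9.5   4.0;           % y
              -pi/2  pi;            % theta (facing direction)
               1    -1];            % P: image index, or -1

% Connect consecutive corner points into wall segments [x1 y1 x2 y2].
nW = size(wall_Para, 1) - 1;
walls = zeros(nW, 4);
for k = 1:nW
    walls(k, :) = [wall_Para(k, :), wall_Para(k+1, :)];
end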

Some parameters are evaluated in a CCF, such as the data of the robot (shape, radius, driving method, wheelbase, maximum translation and rotation speeds, noise levels, etc.), the observation characteristics (maximum and minimum observing ranges, observing angles, observation noises, etc.) and so on. These data are used by the inner modules to build the system and observation models. The robot and the environment drawn in the real-time video also rely on these parameters. The CCF also defines settings related to the simulation run, e.g. the modes of robot motion and tracking display, the switches of the observation display, the strategy of random motion, etc.

2.2. Behaviour controlling modules

A navigation algorithm normally consists of a few modules, such as obstacle avoidance, route planning and localization (and mapping, if the map is not given manually). Although the obstacle avoidance module (OAM, a safety module) is important, it is not discussed in this paper. The simulator provides a built-in OAM so that users can focus on their own algorithms, but it also allows users to switch this function off and build their own OAM as one of the customer subroutines. A bumping reaction function is also integrated in this module; it is always turned on, even when the OAM has been switched off. Without this function, the robot could go through a wall like a ghost if the user had switched off the OAM and the program contained bugs.

The OAM has the flowchart shown in Fig. 2. The robot pose is expressed as X = [x, y, θ]^T, where x, y and θ indicate the Cartesian coordinates and the orientation respectively. The (x, y) pair is used to calculate the distance to each "wall" line segment by basic analytic geometry. The user's navigation algorithm is presented as the Matlab function to be tested, which is called by the OAM. It should output the driving information defined by the robot model, for example the left and right wheel speeds for a differentially driven robot.

Fig. 2 Obstacle avoidance module
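The paper does not spell out the geometry, but the standard point-to-segment computation it refers to can be sketched in a few lines of Matlab (the function name is ours):

function d = distToWall(x, y, seg)
    % Distance from the robot position (x, y) to one wall segment
    % seg = [x1 y1 x2 y2]: project the point onto the segment and
    % clamp the projection to the endpoints.
    p  = [x; y];
    a  = seg(1:2)';   b = seg(3:4)';
    ab = b - a;
    t  = max(0, min(1, dot(p - a, ab) / dot(ab, ab)));
    d  = norm(p - (a + t * ab));
end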

2.3. Data fusion subroutines

The data fusion is another subroutine of the simulator that is available to users. The simulator provides all the information required and receives the output of this subroutine, such as the localization result and the mapping data. Normally, the robot acquires data using its onboard sensors, such as internal odometers, external sonars, CCD cameras, etc. In the simulator, these sensor data should be delivered to the subroutine in a form as close to that of a real robot as possible. For this purpose the observation simulation module (OSM) is developed. The internal data consist of the precise pose plus noise generated with the parameters set by the CCFs, and are easy to acquire. According to the true robot pose and the arrangement of the beacons, it is straightforward to deduce which beacons can be detected by the robot, as well as the distance and direction of each observation. The information on all observed non-visual beacons is selected according to the CCFs and transferred to the data fusion subroutines. For the simulation of vision-based algorithms, the CCFs of the environment contain the image files of the scenes at different places. Combining the camera parameters defined in the CCFs, the beacon orientation and the observation data such as distance and direction, the OSM can calculate and generate zoomed images to simulate the observations at a certain position. The user's observation subroutines therefore acquire images just like those from an onboard camera in the real world.
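A minimal sketch of the range/bearing side of this module is given below, reusing the getObservation name and its first two arguments from Fig. 4; the field-of-view test and the noise parameters are assumptions standing in for the values a CCF would supply.

function obv = getObservation(tpose, beacons, maxRange, sigma)
    % For each beacon, decide from the true pose whether it is
    % detectable, and return noisy [range; bearing; id] columns.
    obv = [];
    for k = 1:size(beacons, 2)
        dx = beacons(1, k) - tpose(1);
        dy = beacons(2, k) - tpose(2);
        r  = hypot(dx, dy);                     % true distance
        if r > maxRange, continue; end          % beyond sensor range
        brg = atan2(dy, dx) - tpose(3);         % bearing relative to heading
        brg = atan2(sin(brg), cos(brg));        % wrap to [-pi, pi]
        obv = [obv, [r   + sigma(1) * randn;    % noisy range
                     brg + sigma(2) * randn;    % noisy bearing
                     k]];                       % beacon identity
    end
end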

2.4. Simulator output module

The "Video Demo & Data Result" is the output module of the simulator. The real-time video gives a direct view of how the algorithm performs, while the output data give a precise record of the simulation. Fig. 3(a) shows a frame of the real-time video, i.e. the whole view, while Fig. 3(b) is an enlarged view of the middle part of Fig. 3(a).

(a) Whole view (b) Enlarged part
Fig. 3 The view of the simulator

Routine Refresh: calculate and draw the current frame
    parameters = getParameter(configure_file);
    Rob    = BuildRobot(robot_Para);
    Wall   = BuildWall(wall_Para);
    Beacon = getBeacons(beacon_Para);
    DrawRobot(Rob, 0, 0, 0, 0, 0, 0);           % initial drawing of the robot
    DrawImage(Wall, Beacon);                    % static environment
    loop for every 40 ms
        control = call(users_Control);          % user's navigation output
        tpose   = getTruePose(control);         % advance the true pose
        obv     = getObservation(tpose, beacons);
        [lpose, l_noise] = call(users_Localization);
        [map, m_noise]   = call(users_mapbuilding);
        DrawRobot(Rob, tpose, lpose);           % true and estimated poses
        DrawImage(l_noise, obv, map, m_noise);  % observations and uncertainties
    end loop

Fig. 4 The output video

The wide straight lines denote the walls of the environment. The circle on the left in Fig. 3(b) is the real position of the robot, and the one on the right is the localization result. The thin straight lines are the feature observations at that moment, and the ellipses with crosses at their centres express the uncertainties of the mapping. The ellipse around the centre of the localization result represents the uncertainty of the localization. The plotting code is based on Bailey's open source [7]. It should be noted that the output data contain the estimated pose, the true pose and the covariance matrices of each step, which can be processed and evaluated precisely after the experiment.
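Because the true and estimated poses are logged at every step, the transient and average localization errors can be recomputed offline; a sketch, assuming the log has been loaded as 3-by-N arrays tpose_log and lpose_log (our names):

% Position error at every step, and its average over the run.
err       = lpose_log(1:2, :) - tpose_log(1:2, :);
transient = sqrt(sum(err .^ 2, 1));     % localization error per step
avgErr    = mean(transient);            % average localization error
plot(transient), xlabel('step'), ylabel('localization error (m)')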

The video is actually implemented by the quick updating of a series of static images. Every 40 milliseconds, the simulator calculates all the state parameters, such as the true pose as well as the localization result of the robot, the current observations and the current mapping result. The simulator draws the image for the current frame from these data and refreshes the output image. Since the image is refreshed 25 times per second, it looks like a real video. The calculation and drawing of the current frame are implemented with the method shown in Fig. 4. In each loop cycle, the DrawRobot function translates and rotates the shape stored in the vector Rob according to the true pose and the localization result respectively, and draws the results with different fill shades or colours. During the processing cycles in Fig. 4, all data and parameters, e.g. lpose, t_pose, map, etc., are recorded in a file by another thread. After the navigation, these data are output together with some basic statistical results.
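The transform inside DrawRobot amounts to a planar rotation followed by a translation. A minimal sketch is given below; the name DrawRobotSketch and the 2-by-n outline format are our assumptions, since Fig. 4 shows DrawRobot with a different argument list.

function DrawRobotSketch(Rob, pose, colour)
    % Rob: robot outline as 2-by-n points around the origin.
    % pose: [x; y; theta]. Rotate the outline, translate it, draw it.
    c = cos(pose(3));  s = sin(pose(3));
    R = [c -s; s c];                                 % planar rotation
    pts = R * Rob + repmat(pose(1:2), 1, size(Rob, 2));
    fill(pts(1, :), pts(2, :), colour);              % filled robot shape
end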

3. Experimental result

The purpose of the experiments is to test the performance of the simulator. The experiments are therefore designed to test the functional modules of the simulator separately, and then to run a real SLAM algorithm in the simulator to test the overall performance. First of all, the OAM is switched off, and the user's navigation module can only provide a constant speed on both wheels. In other words, the
