A state-of-the-art framework, namely deep deterministic policy gradient (DDPG), has achieved notable results in the field of robotic control. When a wheeled mobile robot (WMR) operates in an unstructured environment, it is critical to endow the WMR with the capacity to avoid both static and dynamic obstacles. Thus, an obstacle avoidance algorithm based on DDPG is proposed to realize autonomous navigation in an unknown environment. The WMR in this study is equipped with the requisite sensors to provide fully observable environment information at any moment. A continuous state space description for the WMR and obstacles is designed, together with the reward mechanism and action space. The learning agent, i.e., the studied mobile robot, adopts the DDPG model; through continuous interaction with the surrounding environment and the reuse of historical experience data, the WMR learns the optimal action behavior. Simulation and experimental results strongly verify the collision-free ability in static and dynamic scenarios with multiple observable obstacles.
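The abstract names several DDPG ingredients (reuse of historical experience data, a reward mechanism tied to goal and obstacle distances) without implementation detail. The following is a minimal sketch in Python/NumPy of those ingredients only, not the paper's actual implementation; the function names, buffer design, and reward constants (collision penalty, safety radius) are all illustrative assumptions.

```python
import numpy as np

class ReplayBuffer:
    """Fixed-size experience replay, as used by DDPG to reuse past transitions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.pos = 0  # index of the oldest entry once the buffer is full

    def add(self, state, action, reward, next_state, done):
        item = (state, action, reward, next_state, done)
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            self.data[self.pos] = item  # overwrite oldest transition
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, rng):
        """Uniform minibatch sample for an off-policy update."""
        idx = rng.choice(len(self.data), size=batch_size, replace=False)
        return [self.data[i] for i in idx]

def soft_update(target_weights, online_weights, tau):
    """Polyak averaging of DDPG target networks:
    target <- tau * online + (1 - tau) * target."""
    return tau * online_weights + (1.0 - tau) * target_weights

def shaped_reward(dist_to_goal, min_obstacle_dist, collided, reached,
                  d_safe=0.5):
    """Hypothetical reward: terminal bonus/penalty, progress toward the goal,
    and a graded penalty when the nearest obstacle is inside a safety radius.
    All constants here are assumptions, not values from the paper."""
    if collided:
        return -100.0
    if reached:
        return 100.0
    reward = -dist_to_goal  # encourage progress toward the goal
    if min_obstacle_dist < d_safe:
        reward -= (d_safe - min_obstacle_dist) * 10.0  # obstacle proximity penalty
    return reward
```

In a full DDPG loop, the agent would push each `(state, action, reward, next_state, done)` tuple into the buffer, sample minibatches to update the critic and actor, and apply `soft_update` to the target networks after every gradient step.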