The possibility of more intuitive human-machine interfaces has sparked the development of new visual technologies. The way humans interact with elements of their environment should not be limited to the screens of phones or computers, and alternatives that provide a sensation of spatial freedom are under development. Projection systems, which cast continuous light onto surrounding surfaces, represent a major area of exploration. Advances in artificial vision hardware and software tools enable the acquisition of data from the user and his/her environment, while, in the background, software analyzes variations in the scene in real time without user intervention. This kind of data processing makes it possible to integrate what the user is doing with what the user is seeing. The device proposed in this paper uses an arrangement of infrared sensors that captures the user's hand gestures and then points toward a projection surface in the user's workspace. A gesture recognition software platform recreates the user's 3D environment and analyzes the motion of the key points of the user's hands. These data are then compared with previously established patterns to determine whether the user is performing a preset hand command. If so, the system immediately converts this signal into a command executed with the assistance of an Arduino platform. The embedded platform carries out a previously established protocol in which a two-degree-of-freedom mechatronic system supporting a micro projector allows the user to adjust the position of the projected image on a surface, giving the user access to a 360° virtual space.
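The pattern-comparison step described above can be sketched as follows. This is a minimal illustration only: the template names, 2D keypoint coordinates, and distance threshold are all invented for this example (the actual system compares 3D hand key points captured by the infrared sensor array against its own preset patterns).

```python
import math

# Hypothetical preset gesture templates: each gesture is a short
# sequence of normalized 2D keypoint positions. Names and values
# are illustrative, not taken from the real system.
TEMPLATES = {
    "pan_left":  [(0.0, 0.0), (-0.4, 0.1), (-0.8, 0.2)],
    "pan_right": [(0.0, 0.0), (0.4, 0.1), (0.8, 0.2)],
}

# Assumed maximum mean keypoint distance for a gesture to count
# as a match against a template.
MATCH_THRESHOLD = 0.3


def mean_distance(observed, template):
    """Mean Euclidean distance between two keypoint sequences."""
    return sum(math.dist(p, q) for p, q in zip(observed, template)) / len(template)


def classify_gesture(keypoints):
    """Return the name of the nearest preset gesture, or None if no
    template lies within MATCH_THRESHOLD of the observed keypoints."""
    best_name, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        d = mean_distance(keypoints, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= MATCH_THRESHOLD else None
```

In a full pipeline, the returned gesture name would be translated into a serial command for the Arduino platform, which would in turn drive the two-degree-of-freedom mount holding the micro projector.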
The main goal of this system is to provide an interface in which the user's hand gestures allow interaction not only with software-level elements but also with physically present mechatronic components such as robots or home automation devices. Once the proposed robotic assistant system is in operation, its user will no longer need to refer to a physical screen to enter control commands; instead, the robotic assistant system, through more intuitive and natural motions, will facilitate the execution of the user's commands.