Powered hand prostheses with many degrees of freedom are moving from research into the prosthetics market. To make use of these prostheses' full functionality, it is essential to study efficient schemes for high-dimensional myoelectric control. Human subjects can rapidly learn to employ electromyographic (EMG) activity of several hand and arm muscles to control the position of a cursor on a computer screen, even when the muscle-to-cursor map contradicts the directions in which the muscles act naturally. But can a similar control scheme be translated into real-time operation of a dexterous robotic hand? We found that, despite the different degrees of freedom in the effector output, the learning process for controlling a robotic hand was surprisingly similar to that for a virtual two-dimensional cursor. Control signals were derived from the EMG in two different ways, with a linear and a Bayesian filter, to test how stably user intentions could be conveyed through them. Our analysis indicates that, without visual feedback, control accuracy benefits from filters that reject high EMG amplitudes. In summary, we conclude that findings on myoelectric control principles, studied in abstract, virtual tasks, can be transferred to real-life prosthetic applications.
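The two ways of deriving control signals mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: here the "linear filter" is assumed to be a first-order low-pass of the rectified EMG, and the "Bayesian filter" follows the general idea of recursive Bayesian amplitude estimation with a Laplacian observation model, whose heavy-tailed likelihood discounts isolated high-amplitude samples. All function names, the amplitude grid, and the parameter values are hypothetical.

```python
import numpy as np

def linear_envelope(emg, alpha=0.1):
    """Simple linear estimator: first-order low-pass of the rectified EMG."""
    out = np.zeros(len(emg))
    acc = 0.0
    for i, x in enumerate(emg):
        acc = (1 - alpha) * acc + alpha * abs(x)  # exponential smoothing
        out[i] = acc
    return out

def bayes_envelope(emg, n_bins=100, diffusion=1e-3):
    """Recursive Bayesian amplitude estimate on a discretized grid.

    Assumes a Laplacian observation model p(x | a) = exp(-|x|/a) / (2a)
    and a random-walk prior on the amplitude a (illustrative choices).
    """
    grid = np.linspace(1e-3, 1.0, n_bins)   # candidate amplitudes
    post = np.ones(n_bins) / n_bins         # uniform initial belief
    out = np.zeros(len(emg))
    for i, x in enumerate(emg):
        # prediction step: diffuse belief to model slow amplitude drift
        post = post + diffusion * (np.roll(post, 1) + np.roll(post, -1) - 2 * post)
        post /= post.sum()
        # update step: Laplacian likelihood discounts extreme samples
        like = np.exp(-abs(x) / grid) / (2 * grid)
        post *= like
        post /= post.sum()
        out[i] = grid[np.argmax(post)]      # MAP amplitude estimate
    return out
```

Qualitatively, the linear envelope tracks every rectified sample, so a single high-amplitude artifact pulls the output up immediately, whereas the Bayesian posterior, concentrated by many preceding samples, moves more reluctantly; this is one way such a filter can "reject" high EMG amplitudes.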