Biblio

R
Rana MA, Usman Z, Shareef Z.  2011.  Automatic control of ball and beam system using Particle Swarm Optimization. IEEE 12th International Symposium on Computational Intelligence and Informatics.
Rayyes R, Donat H, Steil JJ.  2020.  Hierarchical Interest-Driven Goal Babbling for Efficient Bootstrapping of Sensorimotor Skills. ICRA. :1336–1342.
Rayyes R, Donat H, Steil JJ.  2021.  Interest-Driven Exploration for Real Robot Applications: Sample-Efficiency, High-Accuracy, and Robustness.
Rayyes R, Kubus D, Steil JJ.  2018.  Learning Inverse Statics Models Efficiently with Symmetry-Based Exploration. Frontiers in Neurorobotics.
Rayyes R, Donat H, Steil JJ, Spranger M.  2021.  Interest-Driven Exploration with Observational Learning for Developmental Robots. IEEE Transactions on Cognitive and Developmental Systems.
Rayyes R, Kubus D, Steil JJ.  2018.  Multi-Stage Goal Babbling for Learning Inverse Models Simultaneously. IROS Workshop.
Rayyes R, Steil JJ.  2019.  Online Associative Multi-Stage Goal Babbling Toward Versatile Learning of Sensorimotor Skills. Int. Conf. Developmental Learning. :327–334.
Rayyes R, Donat H, Steil JJ.  2022.  Efficient Online Interest-Driven Exploration for Developmental Robots. IEEE Trans. Cognitive and Developmental Systems. 14(4):1367–1377.
Rayyes R, Steil JJ.  2016.  Goal Babbling with direction sampling for simultaneous exploration and learning of inverse kinematics of a humanoid robot. Proceedings of the workshop on New Challenges in Neural Computation. 4:56–63.
Rayyes R, Kubus D, Hartmann C, Steil JJ.  2017.  Learning Inverse Statics Models Efficiently. arXiv.
Rayyes R.  2021.  Efficient and Stable Online Learning for Developmental Robots. PhD Thesis (Dr.-Ing.).
Reichler A-K, Gabriel F, Timmann F, Steil JJ, Dröder K.  2019.  An architecture for AutomationML-based constraint modelling and orchestration of Incremental Manufacturing. 7th CIRP Global Web Conference.
Reinhart F, Steil JJ.  2015.  Efficient Policy Search in Low-dimensional Embedding Spaces by Generalizing Motion Primitives with a Parameterized Skill Memory. Autonomous Robots. 38:331–348.
Reinhart F, Lemme A, Steil JJ.  2012.  Representation and Generalization of Bi-manual Skills from Kinesthetic Teaching. IEEE-RAS International Conference on Humanoid Robots. :560–567.
Reinhart F, Steil JJ.  2011.  Neural learning and dynamical selection of redundant solutions for inverse kinematic control. Proc. IEEE Int. Conf. Humanoid Robots. :564–569.
Reinhart F, Shareef Z, Steil JJ.  2017.  Hybrid Analytical and Data-driven Modeling for Feed-forward Robot Control. Sensors. 17(2).
Reinhart F, Steil JJ.  2008.  Recurrent neural associative learning of forward and inverse kinematics for movement generation of the redundant PA-10 robot. Int. Symp. Learning Adaptive Behavior in Robotic Systems (best paper award). 1:35–40.
Reinhart F, Steil JJ.  2011.  Reservoir regularization stabilizes learning of Echo State Networks with output feedback. Proc. European Symposium on Artificial Neural Networks. :59–64.
Reinhart F, Steil JJ.  2009.  Reaching movement generation with a recurrent neural network based on learning inverse kinematics for the humanoid robot iCub. IEEE Conf. Humanoid Robotics. :323–330.
Reinhart F, Steil JJ.  2014.  Efficient Policy Search with a Parameterized Skill Memory. :1400–1407.
Reinhart F, Steil JJ.  2009.  Goal-directed movement generation with a transient-based recurrent neural network controller. Advanced Technologies for Enhanced Quality of Life. :112–117.
Reinhart F, Steil JJ.  2012.  Regularization and stability in reservoir networks with output feedback. Neurocomputing. 90:96–105.
Reinhart F, Steil JJ.  2016.  Hybrid Mechanical and Data-driven Modeling Improves Inverse Kinematic Control of a Soft Robot. Procedia Technology. 26:12–19.
Reinhart F, Steil JJ.  2011.  State prediction: a constructive method to program recurrent neural networks. Artificial Neural Networks and Machine Learning – ICANN 2011: 21st International Conference on Artificial Neural Networks, Espoo, Finland, June 14-17, 2011, Proceedings, Part I. 6791:159–166.
