
Robotics, Autonomy and Machine Intelligence Group

The Robotics, Autonomy and Machine Intelligence (RAMI) Group, led by Prof Nabil Aouf, is dedicated to fulfilling the ambitious and innovative requirements of its customers. The group has steadily established itself as a world leader across a range of applications, from automatic sensing and AI-driven data processing to robotics, unmanned vehicles, platform navigation and control, and cyber-physical systems.

Robotics and Autonomous Systems (RAS) Research

  • Vehicles Research (Air, Ground, Maritime and Space)
  • Systems (Non-Vehicles) Research

Research Areas include:

1. Navigation, Guidance and Control
2. Real-time Imaging and Embedded Vision-based Systems
3. Planning and Re-planning (Decision Making)
4. Data Fusion and Mining (for Detection, Recognition and Tracking)
5. Human-Machine Interface/Augmented-Virtual Reality
6. Autonomous Cyber (Cyber Soft/Cyber Physical)

Our Research Philosophy

  • To provide intelligence to Robotics and Autonomous Systems (RAS)
  • To provide low-level (to mid-level) TRL solutions originating from real-world problems
  • To address applications ranging from civilian to defense

Applications of Interest

Our applications of interest include (but are not limited to):

  • Space and Aerospace Autonomous Systems
  • Digital Healthcare Technologies
  • Industrial Autonomous Robotics (Construction, Manufacturing, Farming)
  • Autonomy in Defense and Security Applications
  • Mobility (including Autonomous Cars) and Smart Cities

Examples of what we do

Navigation, Guidance and Control

1- VSLAM/Visual (and LIDAR) Odometry Navigation

Extending the capability of RAS to navigate in uncharted or partially charted territories without requiring GPS. The techniques developed give the unmanned platform accurate self-localization and its own 2D/3D map creation from visual, infrared or multispectral imagery, or from LIDAR returns. The architectures proposed in our work are extremely useful for GPS-denied navigation and areas without GPS coverage, e.g. indoors, tunnels and blind spots, and even in more extreme GPS-free environments such as underwater and space.
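
As a toy illustration of the bookkeeping at the core of any odometry pipeline, the sketch below chains frame-to-frame relative motions into a global 2D pose. The relative motions are hand-made here; in practice a visual or LIDAR front end would estimate them:

```python
import math

def compose(pose, delta):
    """Compose a global 2D pose (x, y, theta) with a relative
    motion (dx, dy, dtheta) expressed in the robot's own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Relative motions as an odometry front end might report them:
# drive 1 m forward, turn 90 degrees left, repeated four times.
deltas = [(1.0, 0.0, math.pi / 2)] * 4

pose = (0.0, 0.0, 0.0)
trajectory = [pose]
for d in deltas:
    pose = compose(pose, d)
    trajectory.append(pose)
# The platform traces a square and returns (numerically) to the origin.
```

Real systems add loop closure and map optimization on top of exactly this kind of pose chaining, because small per-frame errors accumulate.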

2- Guidance (Path Planning) and Control

Path planning for fully autonomous robots of all kinds within a congested or unknown area can be very challenging. Our solutions use vision techniques to plan vehicle paths between two locations while avoiding static and dynamic obstacles through path adaptation. The solutions we propose are fully passive and do not rely on GPS.
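
A minimal sketch of the planning step is grid-based A* search around static obstacles. This is a classical baseline, not the group's actual planner, and the grid and names are illustrative:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid
    (0 = free cell, 1 = static obstacle).
    Returns the cell sequence from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    tie = itertools.count()  # tiebreaker so the heap never compares cells
    frontier = [(heuristic(start), next(tie), 0, start, None)]
    came_from = {}
    best_g = {start: 0}
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:  # reconstruct by walking the parent chain
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(
                        frontier, (ng + heuristic(nxt), next(tie), ng, nxt, cell))
    return None

# A small map with a wall the planner must route around.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, start=(0, 0), goal=(2, 0))
```

Handling dynamic obstacles amounts to re-running (or incrementally repairing) such a search as the perceived occupancy changes, which is where the "path adaptation" above comes in.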

3- Autonomous Platforms Cooperation

Complex tasks are well suited to being carried out by multiple autonomous vehicles or robots. From a pair of platforms up to a swarm, collaboration and cooperation must be developed to reach the efficiency required by the assigned task(s). We provide solutions through innovative research concepts in cooperative navigation, guidance and control, in either centralized or distributed architectures, taking network constraints and power management into account.
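
The distributed flavor of such cooperation can be illustrated with a classic average-consensus iteration, where each platform repeatedly averages its state with its neighbours' states over the communication topology. This is a toy sketch; real schemes must also handle the network and power constraints mentioned above:

```python
def consensus_step(states, neighbours):
    """One synchronous averaging round: each platform replaces its
    state with the mean of its own state and its neighbours' states."""
    return [
        (states[i] + sum(states[j] for j in neighbours[i]))
        / (1 + len(neighbours[i]))
        for i in range(len(states))
    ]

# Three platforms sharing state over a line topology: 0 -- 1 -- 2.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
states = [0.0, 3.0, 9.0]  # e.g. a scalar quantity to agree on
for _ in range(50):
    states = consensus_step(states, neighbours)
# All platforms converge to a single agreed value.
```

A centralized architecture would instead gather all states at one node and broadcast the result; the consensus form above needs only local, neighbour-to-neighbour communication.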

Sensor Fusion and AI for Enhanced Perception and Decision-Making Processes

1- AI based object/target detection, Recognition and Tracking

Perceiving the environment and then interpreting it is crucial for autonomous agents (all robotic systems). Accurate detection and recognition of objects of interest within the perceived scene supports the subsequent tasks allocated to these agents. We develop solutions, based on deep learning algorithms and other classical AI techniques, that segment the scene efficiently in 2D and/or 3D to provide an accurate understanding of the agent's environment.
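
One standard post-processing stage shared by most detection pipelines, greedy non-maximum suppression over scored bounding boxes, can be sketched as follows (the boxes and scores are toy values; the detector producing them is assumed):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop any box overlapping it above
    the threshold, and repeat. Returns the indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) <= iou_threshold for k in kept):
            kept.append(i)
    return kept

# Two near-duplicate detections of one object, plus a separate object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
keep = nms(boxes, scores)  # indices of the surviving detections
```

The same suppression idea carries over to 3D boxes for LIDAR-based perception, with the overlap measure computed over volumes instead of areas.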

2- AI based VR/AR for Intelligent Teleoperation

Aiming at full autonomy of systems is great when possible, but in a number of scenarios keeping the human operator in the loop is still the favored option. We develop intelligent teleoperation techniques so that the operator has immersive access to the environment of operation through VR models augmented with real-time sensory data in an AR framework. AI-based techniques process the real-time data provided to the operator and integrate object recognition schemes to help the operator assess the scene and remotely make the best decisions.

3- AI based Decision-Making for Autonomous Systems

From the data collected about the scenario of interest, appropriate AI-based decisions for the deployed autonomous systems are inferred. We develop distributed and autonomous task allocation, planning and re-planning algorithms based on AI schemes, considering dynamic scene perception for robotic autonomous systems. Comparing these algorithms with classical optimization-based decision-making algorithms is also of interest here.
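
The task-allocation step can be sketched with a greedy cost-based assignment, the kind of classical optimization baseline such AI schemes are compared against. The cost matrix here is hypothetical (e.g. travel costs); learned or inferred costs would slot in the same way:

```python
def greedy_allocate(cost):
    """Assign tasks to robots greedily: walk (robot, task) pairs in
    ascending cost order, committing each robot and task at most once.
    cost[r][t] is robot r's cost of performing task t."""
    pairs = sorted(
        (cost[r][t], r, t)
        for r in range(len(cost))
        for t in range(len(cost[0]))
    )
    used_robots, used_tasks, plan = set(), set(), {}
    for c, r, t in pairs:
        if r not in used_robots and t not in used_tasks:
            plan[t] = r  # task t goes to robot r
            used_robots.add(r)
            used_tasks.add(t)
    return plan

# Hypothetical costs for three robots (rows) and three tasks (columns).
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
plan = greedy_allocate(cost)  # maps each task to one robot
```

Greedy assignment is fast but not always optimal; optimal alternatives (e.g. the Hungarian algorithm) and distributed auction-style variants trade computation and communication for solution quality.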

4- Explainable AI for Autonomous Systems

The adoption of AI techniques for perception or decision making in Robotic Autonomous Systems opens a new era for these systems to reach their full level of autonomy. However, the validation and certification of these intelligent algorithms remain open questions. We develop deep learning model solutions that break the opacity of standard "black-box" deep learning networks to provide explanations of the decisions these algorithms make. Such models provide more assurance regarding the applicability of AI-based techniques to autonomous systems.
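
The flavor of such explanations can be illustrated with occlusion-style feature attribution: perturb one input at a time and record how much the model's output changes. A fixed linear scorer stands in for the network here; a real model would be wrapped the same way:

```python
def occlusion_attribution(model, x, baseline=0.0):
    """Score each input feature's importance as the drop in model
    output when that feature is replaced by a baseline value."""
    base_score = model(x)
    importance = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # knock out one feature
        importance.append(base_score - model(occluded))
    return importance

# Stand-in "model": a fixed linear scorer (hypothetical weights).
weights = [2.0, -1.0, 0.0]
model = lambda x: sum(w * v for w, v in zip(weights, x))

x = [1.0, 1.0, 1.0]
scores = occlusion_attribution(model, x)  # mirrors the weights
```

For image-based networks the same idea is applied patch-wise, producing saliency maps that show which regions of the scene drove a detection or decision.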

Prospective PhD candidates, whether fully funded or seeking funding, are welcome to make contact. Visiting students and fellows are also welcome.