
Facebook Open Source Droidlet Robotics Platform


Introducing droidlet

Introducing droidlet, a one-stop shop for modularly building intelligent agents. The annals of science fiction are brimming with robots that perform tasks independently in the world, communicate fluently with people using natural language, and even improve themselves through these interactions. These machines do much more than follow preprogrammed instructions; they understand and engage with the real world much as people do.

Robots today can be programmed to vacuum the floor or perform a preset dance, but the gulf is vast between these machines and ones like Wall-E or R2-D2. This is largely because today’s robots don’t understand the world around them at a deep level. They can be programmed to back up when bumping into a chair, but they can’t recognize what a chair is or know that bumping into a spilled soda can will only make a bigger mess.

To help researchers and hobbyists build more intelligent real-world robots, we’ve created and open-sourced the droidlet platform.

Droidlet is a modular, heterogeneous embodied agent architecture and a platform for building embodied agents that sits at the intersection of natural language processing, computer vision, and robotics. It simplifies integrating a wide range of state-of-the-art machine learning (ML) algorithms into embodied systems and robotics to facilitate rapid prototyping.

People using droidlet can quickly test out different computer vision algorithms with their robot, for example, or replace one natural language understanding model with another. Droidlet enables researchers to easily build agents that can accomplish complex tasks either in the real world or in simulated environments like Minecraft or Habitat.
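To illustrate the kind of swapping described above, here is a minimal sketch of how components sharing a common interface can be exchanged at runtime. All class and method names here are illustrative, not droidlet's actual API.

```python
# Sketch: perception components share one interface, so a heuristic
# detector and a learned one are interchangeable from the agent's view.

class PerceptionModule:
    """Common interface: take a raw observation, return structured facts."""
    def perceive(self, observation):
        raise NotImplementedError

class ColorBlobDetector(PerceptionModule):
    """Toy heuristic detector: reports the dominant color in the input."""
    def perceive(self, observation):
        return [{"kind": "blob", "color": max(observation, key=observation.get)}]

class StubNeuralDetector(PerceptionModule):
    """Stand-in for a learned model exposing the same interface."""
    def perceive(self, observation):
        return [{"kind": "object", "label": "chair", "confidence": 0.9}]

class Agent:
    def __init__(self, perception):
        self.perception = perception   # swappable component
        self.memory = []

    def step(self, observation):
        # Whatever the module reports gets written to memory.
        self.memory.extend(self.perception.perceive(observation))

agent = Agent(ColorBlobDetector())
agent.step({"red": 0.7, "blue": 0.3})
agent.perception = StubNeuralDetector()   # hot-swap the perception module
agent.step({"red": 0.1, "blue": 0.2})
print(agent.memory)
```

The rest of the agent never changes; only the component behind the shared interface does.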

There is much more work to do, both in AI and in hardware engineering, before we will have robots that are even close to what we imagine in books, movies, and TV shows. But with droidlet, robotics researchers can now take advantage of the significant recent progress across the field of AI and build machines that can effectively respond to complex spoken commands like “pick up the blue tube next to the fuzzy chair that Bob is sitting in.” We look forward to seeing how the research community uses droidlet to advance this important field.


A family of agents

Rather than considering an agent as a monolith, we consider the droidlet agent to be a collection of components, some heuristic and some learned. As more researchers build with droidlet, they will improve its existing components and add new ones, which others in turn can add to their own robotics projects. We believe this heterogeneous design makes scaling tractable, because it allows a component to be trained on large data when large data is available, and it lets programmers use sophisticated heuristics when those are available. The components can be trained with static data when convenient (e.g., a collection of labeled images for a vision component) or with dynamic data when appropriate (e.g., a grasping subroutine).

The high-level agent design consists of these modules and the interfaces between them:


1. A memory system acting as a nexus of information for all agent modules.

2. A set of perceptual modules (e.g., object detection or pose estimation) that process information from the outside world and store it in memory.

3. A set of lower-level tasks, such as “move three feet forward” and “place item in hand at given coordinates,” that can effect changes in the agent’s environment.

4. A controller that decides which tasks to execute based on the state of the memory system.

Each of these modules can be further broken down into trainable or heuristic components.
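The four-part design above can be sketched as a simple perceive-decide-act loop. This is a minimal toy illustration of how the pieces relate, not droidlet's real interfaces; all names are illustrative.

```python
# Toy sketch of the four-module design: memory, perception, tasks, controller.

class Memory:
    """1. Nexus of information shared by all modules."""
    def __init__(self):
        self.facts = []
    def add(self, fact):
        self.facts.append(fact)

def perceive(world, memory):
    """2. Perceptual module: store what the agent observes in memory."""
    for obj in world["visible"]:
        memory.add(("saw", obj))

class MoveForward:
    """3. Lower-level task that effects a change in the environment."""
    def __init__(self, feet):
        self.feet = feet
    def execute(self, world):
        world["position"] += self.feet

def controller(memory):
    """4. Controller: pick a task based on the state of memory."""
    if ("saw", "doorway") in memory.facts:
        return MoveForward(feet=3)
    return None

world = {"position": 0, "visible": ["doorway"]}
memory = Memory()
perceive(world, memory)        # perception writes into memory
task = controller(memory)      # controller reads memory, chooses a task
if task:
    task.execute(world)        # task changes the environment
print(world["position"])  # 3
```

Note that the controller only ever reads memory, never raw sensor data; that indirection is what lets each module be swapped or retrained independently.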

This architecture also enables researchers to use the same intelligent agent on different robotic hardware by swapping out the tasks and the perceptual modules as needed by each robot’s physical architecture and sensor requirements.


Substantially reducing friction in integrating ML models

The agent illustrated above demonstrates how to build with droidlet using specified components, but this is not the only way to use the library. The droidlet platform supports researchers building embodied agents more generally by reducing friction in integrating ML models and new capabilities, whether scripted or learned, into their systems, and by providing UX for human-agent interaction and data annotation.

The modules can all be used independently of the main agent, and the state-of-the-art perceptual modules may be of particular value to other researchers, given that current off-the-shelf models perform poorly in robotic use cases. In addition to the wrappers for connecting ML models to robots, we provide model zoos for the various modules, including several vision models fine-tuned for the robot setting (for RGB and RGBD cameras).
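A wrapper of the kind mentioned here typically adapts a generic detector's raw output into the records a robot's memory expects. The sketch below is hypothetical: `FakeDetector` stands in for an off-the-shelf vision model, and none of these names are droidlet's actual model-zoo API.

```python
# Hypothetical sketch of a robot-facing wrapper around a generic detector.

class FakeDetector:
    """Stand-in for a pretrained RGB/RGBD detection model."""
    def __call__(self, image):
        # Pretend the model found one object with a pixel-space box.
        return [{"label": "cup", "box": (10, 20, 50, 60), "score": 0.85},
                {"label": "cup", "box": (0, 0, 5, 5), "score": 0.10}]

class RobotDetectorWrapper:
    """Converts raw detector output into memory-ready records,
    dropping low-confidence detections common in robot settings."""
    def __init__(self, model, min_score=0.5):
        self.model = model
        self.min_score = min_score

    def detect(self, image):
        return [
            {"label": d["label"], "box": d["box"]}
            for d in self.model(image)
            if d["score"] >= self.min_score
        ]

wrapper = RobotDetectorWrapper(FakeDetector())
print(wrapper.detect(image=None))  # [{'label': 'cup', 'box': (10, 20, 50, 60)}]
```

Because the wrapper owns the filtering policy, the same underlying model can be tuned per robot (e.g., a stricter `min_score` for a manipulation arm than for navigation).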

Droidlet is bolstered by an interactive dashboard that researchers can use as an operational interface when building agents. It includes debugging and visualization tools, as well as an interface for correcting agent errors on the fly or for crowdsourced annotation. As with the rest of the agent, the dashboard prioritizes modularity and makes it easy for researchers or hobbyists to add new widgets and tools.

A powerful and flexible platform

For researchers or hobbyists, droidlet offers batteries-included agents with primitives for visual perception and language, as well as a heuristic memory system and controller. Droidlet users can incorporate these modules into their robots or simulated agents by writing tasks that wrap primitives like “move to coordinate x, y, z.” These agents can perceive their environment via the provided pretrained object detection and pose estimation models and store their observations in the robot’s memory. Using this representation of the world around them, they can respond to language commands (e.g., “go to the red chair”) by leveraging a pretrained neural semantic parser that converts natural language to programs.
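The flow described above, a parsed command resolved against memory and dispatched to a task wrapping a movement primitive, can be sketched as follows. Everything here is a toy illustration under assumed names; droidlet's actual parser output and task classes differ.

```python
# Toy sketch: command -> memory lookup -> task wrapping a movement primitive.

def move_to(robot, x, y, z):
    """Low-level primitive a robot backend would provide."""
    robot["position"] = (x, y, z)

class MoveTo:
    """Task wrapping the 'move to coordinate x, y, z' primitive."""
    def __init__(self, target):
        self.target = target
    def execute(self, robot):
        move_to(robot, *self.target)

# Memory: object label -> coordinates, as filled in by perception.
memory = {"red chair": (2.0, 0.0, 1.5)}

def handle_command(parse, robot):
    """Dispatch a parsed command like {'action': 'go_to', 'object': ...}.
    A semantic parser would produce this structure from natural language."""
    if parse["action"] == "go_to":
        target = memory[parse["object"]]   # ground the referent in memory
        MoveTo(target).execute(robot)

robot = {"position": (0.0, 0.0, 0.0)}
handle_command({"action": "go_to", "object": "red chair"}, robot)
print(robot["position"])  # (2.0, 0.0, 1.5)
```

Porting such an agent to new hardware would only mean reimplementing the `move_to` primitive; the task, memory, and command handling stay the same.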

Droidlet is also a flexible platform: the ML modules and dashboards can be used outside the full agent. Over time, droidlet will become even more powerful as we add more tasks, sensory modalities, and hardware setups, and as other researchers and hobbyists build and contribute their own models.

Building intelligent machines that work in the real world is a fundamental scientific goal in AI. Facebook AI is helping the community by releasing not only droidlet and Habitat, but also other, independent research projects such as DD-PPO, our advanced point-goal navigation algorithm; SoundSpaces, our audio-visual platform for embodied AI; and our simple PyRobot framework. The path to building robots with capabilities that approach those of people is long, but we believe that by sharing our research work with the AI community, all of us will get there faster.

Facebook Introduces New Platform for Building Robots: Droidlet can build embodied agents that recognise, react to, and navigate their surroundings.

Facebook has introduced Droidlet, an open-source, modular, heterogeneous embodied agent architecture. The Droidlet platform can be used to build embodied agents that draw on natural language processing, computer vision, and robotics, letting researchers build more intelligent real-world robots. In addition, it simplifies the integration of a wide range of state-of-the-art machine learning algorithms and robotics to facilitate rapid prototyping.

Droidlet

A Droidlet agent is considered to be made up of a collection of components, some heuristic and some learned. Droidlet allows for component swapping and customisable designs. The platform gives researchers a debugging dashboard as well as a markup interface for correcting errors and annotating data. Robotics-focused features, such as robot-specific wrappers and environments and models built specifically for robots, are also included.

Not only does the Droidlet platform draw on different tools and advancements in artificial intelligence, like PyRobot, AllenNLP, and Detectron, but it also connects these tools to provide a better, unified experience.

Droidlet lets researchers use different computer vision or NLP algorithms with their robots. In addition, they can use Droidlet to accomplish complex tasks both in the real world and in simulated environments like Minecraft or Habitat.

Droidlet is capable of building embodied agents that can recognise, react to, and navigate their surroundings. It simplifies the integration of various cutting-edge machine learning algorithms into these systems, allowing users to prototype new ideas faster than ever before.

According to the research paper, “Droidlet: modular, heterogenous, multi-modal agents”, the objective of the platform is to build intelligent agents that can learn continuously from their encounters with the real world.

The researchers hope that the Droidlet platform may help to further their understanding of various areas of research, including self-supervised learning, multi-modal learning, interactive learning, human-robot interaction, and lifelong learning.

Droidlet provides “batteries-included” systems for researchers and hobbyists, with access to pretrained object detection and pose estimation models that collect observations and store them in the robot’s memory. To convert a natural language statement like “Go to the red chair” into a program, the system invokes a pretrained neural semantic parser.

What’s Droidlet made of?

Droidlet comprises a collection of components, some based on heuristics and others trained via machine learning. The Droidlet agent is made up of the following elements:

1. Memory system: an information storage and retrieval system that operates across the various modules.

2. Perceptual modules: a set of modules that obtain data from the environment and store it in memory.

3. Lower-level tasks: tasks that effect changes in the agent’s environment, such as “move three feet forward” and “place item in hand at given coordinates.”

4. Controller: a controller that decides which tasks to perform depending on the current state of the memory.

The platform’s components can also deliver robust and sophisticated results while operating independently of the full agent. In addition, the Droidlet system will grow even more capable over time as new functions, sensory modalities, and hardware setups are contributed by others.

Meanwhile, robotics software development is increasingly becoming a large portion of Facebook’s tech operations. The company recently teamed up with Carnegie Mellon and Berkeley to teach robots how to adjust to different environments in real time. It will be interesting to see how Facebook’s latest open-source platform works out for researchers in the coming days.

The Facebook open-source droidlet robotics platform is available at the following link:

https://github.com/facebookresearch/droidlet

