As I drove through the neighborhood, screeching tires drew my eyes to a child tumbling off the hood of a shiny new sports car. I called 911, and the robots arrived just a minute later to assess the situation. The paramedics were on the way and wanted to know what they'd be facing when they got to the scene. They connected over Wi-Fi to the robots to see what they saw, hear what they heard, and ask them questions about the situation.
The robotic sentries are a real convenience of the future. They cut crime with their watchful eyes, and they relay important information about tragedies like this car accident to experts miles away. These robots will be preprogrammed for particular scenarios, so they will be able to assist humans and identify critical information. Eventually they will replace the paramedics altogether; we already have robotic surgeons.
But what happens if a robot encounters a scenario that doesn't match any of its preprogrammed ones? The robots will be quite intelligent in the sense that they are full of existing information, but they will also need to process information that didn't exist before they did. How are robots going to accumulate new information and integrate it into their existing knowledge base? And once they have acquired it, how are they going to communicate unknown concepts to humans and other robots?
This is one of the issues being addressed by the Semantic Web, and the problem there is a difficult one. It is still much easier, though, than our real-world example, where robots roam the physical world. On the Semantic Web, humans are categorizing information into ontologies that will help robots ingest and use information more effectively. But these ontologies don't exist in the real world. How will a robot tree surgeon know whether a particular tree is deciduous? How will it know a particular plant is a tree at all, or be able to tell whether a tree is jeopardizing power lines?
The information behind these questions can be stored in a relational database. There might be a table of trees with common names, descriptions, family, genus, and species, and perhaps an image for visual identification. But this database will never be complete: scientists discover roughly 50 new species every day.
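To make the tree table concrete, here is a minimal sketch using Python's built-in sqlite3 module. The column names are illustrative guesses based on the description above, not a real botanical schema.

```python
import sqlite3

# A toy version of the tree table described in the text.
# Column names are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trees (
        id          INTEGER PRIMARY KEY,
        common_name TEXT,
        description TEXT,
        family      TEXT,
        genus       TEXT,
        species     TEXT,
        image_path  TEXT  -- reference to a photo for visual identification
    )
""")
conn.execute(
    "INSERT INTO trees (common_name, family, genus, species) VALUES (?, ?, ?, ?)",
    ("Red maple", "Sapindaceae", "Acer", "rubrum"),
)
row = conn.execute("SELECT genus, species FROM trees").fetchone()
print(row)  # ('Acer', 'rubrum')
```

The schema is trivially easy for a human to write down; the hard part, as the next paragraphs argue, is keeping it complete as the world changes.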
Robots will too. Robots will discover new concepts more frequently than humans, by orders of magnitude. How are they going to store and categorize that information, share it with other robots and humans, build reports, analyze it, and use it to make the world better?
Humans do this as we mature. We learn something new every day, right? As we learn a new language, eventually we can ask questions about new things or abstract concepts using a vocabulary of smaller, more familiar words. There is a certain minimum vocabulary needed to bootstrap -- apparently around 3,000-4,000 words. This isn't a perfect analogy, though, because most of those words are translations from a language we already speak, and both languages describe the same world. Learning a new language means matching new words to words we already know, whereas our robots will encounter totally new information that doesn't exist in any information set they know.
What, then, is the minimum corpus of knowledge a robot needs to exist independently? The world around us is more than just words; it's behaviors, physical laws, and mathematics. Interacting with humans compounds the complexity, since each one is unique. How much will a robot need to know about a person to evaluate benevolence or malice?
In the information world around us, humans have hand-built thousands of data warehouses, many of them inside robots already. But to be successful, our robots will have to do it themselves, in an automated fashion. A robot won't have time to ask a human, "How do I model my information?" People are too slow. Instead, robots will examine the world and derive properties of objects to use as fields in a table. They'll build data models and construct relationships. They'll put nice interfaces on the information so they can communicate it to humans. It's from this data in context that meaning will show itself.
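The first step described above -- deriving fields for a table from raw observations, with no human in the loop -- can be sketched in a few lines. This is a toy heuristic, and the field names and observations are invented for illustration.

```python
# Toy schema inference: map each observed property to the broadest
# Python type seen for it across a batch of raw observations.
def infer_schema(records):
    schema = {}
    for record in records:
        for field, value in record.items():
            t = type(value).__name__
            # Conflicting types for the same field? Fall back to text.
            if schema.get(field, t) != t:
                t = "str"
            schema[field] = t
    return schema

# Hypothetical observations a tree-surgeon robot might record.
observations = [
    {"species": "Acer rubrum", "height_m": 12.5, "near_power_line": True},
    {"species": "Quercus alba", "height_m": 20.1, "canopy_radius_m": 8.0},
]
print(infer_schema(observations))
```

Note that the schema grows as new properties appear -- the robot never has to stop and ask anyone how to model its information.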
Qrimp is an ancient ancestor of these robots. While Qrimp can't understand the meaning in the spreadsheets you give it -- not yet -- it can present the data to you in ways that will help you understand it better. It can automatically find relationships in your data and build data models and reports for you. This is one step in the process. We still have a long way to go. It will be an interesting journey.
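Relationship discovery of this kind could work something like the sketch below. To be clear, this is not Qrimp's actual algorithm -- just one plausible heuristic: if every value in one table's column also appears among another table's ids, propose a foreign-key link. The table and column names are made up.

```python
# A rough heuristic for discovering relationships between tables.
# tables: {table_name: list of row dicts}. Returns proposed FK links
# as (child_table, child_column, parent_table) tuples.
def find_relationships(tables):
    links = []
    for child_name, child_rows in tables.items():
        for column in child_rows[0]:
            child_values = {row[column] for row in child_rows}
            for parent_name, parent_rows in tables.items():
                if parent_name == child_name:
                    continue
                parent_ids = {row.get("id") for row in parent_rows}
                if child_values and child_values <= parent_ids:
                    links.append((child_name, column, parent_name))
    return links

# Invented example data: a genus lookup table and a table of trees.
tables = {
    "genera": [{"id": 1, "name": "Acer"}, {"id": 2, "name": "Quercus"}],
    "trees": [
        {"id": 10, "genus_id": 1, "species": "rubrum"},
        {"id": 11, "genus_id": 2, "species": "alba"},
    ],
}
print(find_relationships(tables))  # [('trees', 'genus_id', 'genera')]
```

Real data is messier than this, of course -- partial overlaps, ambiguous columns, dirty values -- which is exactly why automatic data modeling is only one step on a long road.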