Practicing Safe Coboting
Like it or not, you will be interacting with either physical or software robots soon. A cobot, or co-robot (short for collaborative robot), is a robot intended to physically interact with humans in a shared workspace. It's not just the obvious robots to consider: there are also software bots that can assist you with knowledge and data mining or take actions that eliminate drone work. While there will be initial interfacing issues and learning curves to get past, there are a number of long-term issues to consider when interacting with bots.
Bots Constantly Learn and Change
Software & Hardware that are pretty static and only change on periodic cycles is the norm for us these days. Our ability to absorb change as humans is somewhat limited. Bots (hardware or software) can learn on a continuous basis presenting a challenge to people. This can be a good thing in that contextually sensitive advice or actions can greatly assist our outcomes and experiences. It can also be a challenge in that the bots can adapt quicker than us even though they can’t necessarily think. We will have to absorb differences in bot behavior and even anticipate tactical and strategic interactions with bots. If bots can explain their incremental learnings leveraging explainable AI, some of the challenges can be obviated.
Bots Monitor You Looking for Better Practices
Bots of all kinds will be close to people in a more intimate way, so they can collect data and information about human behavior. The upside is that these bots can find emerging better practices and show the way to better outcome delivery. Built-in coaching can help raise the skill and behavior levels of people in both work and personal contexts. The downside of bot monitoring is that it could hand bad management the information to turn against people. Imagine an obsessive-compulsive boss receiving reports of behavior deviations and, in turn, using them as a weapon against workers. If the bosses are monitored too, that would certainly help.
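One design hedge against the weaponization problem is to aggregate observations by practice rather than by person. The sketch below assumes hypothetical observation records and field names; it simply drops worker identities before summarizing which practice delivers the better outcome.

```python
# A minimal sketch of monitoring aimed at surfacing better practices rather
# than singling people out: outcomes are aggregated by practice, not by person.
# The observations and field names are illustrative assumptions.
from collections import defaultdict
from statistics import mean

observations = [
    {"worker": "w1", "practice": "template reply", "minutes_to_resolve": 12},
    {"worker": "w2", "practice": "template reply", "minutes_to_resolve": 15},
    {"worker": "w3", "practice": "free-form reply", "minutes_to_resolve": 27},
    {"worker": "w1", "practice": "free-form reply", "minutes_to_resolve": 31},
]

by_practice = defaultdict(list)
for obs in observations:
    # Deliberately drop the worker identity before aggregating.
    by_practice[obs["practice"]].append(obs["minutes_to_resolve"])

summary = {practice: mean(times) for practice, times in by_practice.items()}
best = min(summary, key=summary.get)
print(f"Emerging better practice: '{best}' (avg {summary[best]:.0f} min to resolve)")
```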
Bots Can Give You Early Warning
Bots can watch multiple contexts at the same time. This means that people operating in a known context can receive notification of context spillover, where emerging signals, patterns, and new scenarios point to an opportunity or a threat. Bots are generally far more omnipresent and aware than people, especially when they are learning from emerging and fast-moving data or information. People can benefit from this early-warning behavior in hardware or software bots. The downside is that there could be too many warnings to deal with while trying to accomplish outcomes, so the warnings themselves become an impediment to progress.
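One way to keep early warnings useful rather than overwhelming is to filter and throttle them. The sketch below shows a hypothetical EarlyWarningBot that forwards only high-severity signals, suppresses repeats within a cooldown window, and holds everything else for a periodic digest; the thresholds and signal names are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of early warning with a guard against alert fatigue:
# weak or repeated signals are held back for a digest instead of interrupting.
import time


class EarlyWarningBot:
    def __init__(self, severity_threshold: float = 0.7, cooldown_s: float = 300.0):
        self.severity_threshold = severity_threshold
        self.cooldown_s = cooldown_s
        self.last_alert_at: dict = {}   # context -> time of last forwarded warning
        self.suppressed: list = []      # lower-priority signals kept for the digest

    def observe(self, context: str, signal: str, severity: float) -> None:
        now = time.monotonic()
        recently_alerted = (now - self.last_alert_at.get(context, -1e9)) < self.cooldown_s
        if severity >= self.severity_threshold and not recently_alerted:
            self.last_alert_at[context] = now
            print(f"EARLY WARNING [{context}] {signal} (severity {severity:.2f})")
        else:
            # Don't interrupt: queue weak or repeated signals for the digest.
            self.suppressed.append((context, signal, severity))

    def digest(self) -> None:
        if self.suppressed:
            print(f"Digest: {len(self.suppressed)} lower-priority signals held back.")


bot = EarlyWarningBot()
bot.observe("supply-chain", "supplier lead times trending up", 0.82)
bot.observe("supply-chain", "same supplier signal repeated", 0.85)  # suppressed by cooldown
bot.observe("sales", "minor dip in one region", 0.30)               # below threshold
bot.digest()
```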
Net; Net:
We can be safe when interacting with hardware or software bots if we are aware, informed, trained, and practiced at leveraging the positive aspects of these bots. Awareness of the underbelly of these bots, and of whether they stay ethically tuned to the law, cultural norms, and human needs, will play a big role in safe interactions with bots.