AI is becoming more and more ingrained in our daily lives, from ChatGPT-enhanced Bing search to GitHub Copilot. While generally beneficial (machines do more work so people can devote their time to other activities), you still need real experience in a given domain before you can trust the results AI provides. As Ben Kehoe, a former cloud robotics research scientist for iRobot, argues, users still have to judge whether an AI's recommendations are worthwhile, because they are ultimately responsible for them.
We are in the uncomfortable toddler stage of AI, where it has great potential but it isn't yet apparent what it will develop into. I've argued before that the greatest achievements of AI to date have come not at the expense of people but as a complement to them. Imagine a large-scale network of machines handling requests that people could answer themselves, just much more slowly.
Things like "totally autonomous self-driving cars" (which are anything but) are already a reality. The AI and software aren't quite smart enough yet, and there are plenty of crashes (at least 400 last year), but the law still doesn't let a driver blame the AI when one happens. Another example: at the public unveiling of the new AI-powered Bing, ChatGPT was amazing right up until it started fabricating information.
Not that these or other applications of AI are bad. It serves as a reminder that, as Kehoe contends, people cannot hold artificial intelligence (AI) responsible for the results of using AI. "A lot of the AI ideas I see imply that AI will be able to assume the complete responsibility for a certain activity for a person, and implicitly presume that the person's accountability for the task will just sort of… dissipate," he emphasizes. If a Tesla collides with another vehicle, a person is at fault. Likewise, if DALL-E misuses protected material or ChatGPT commits copyright infringement, the people using those tools are liable for the results.
For me, using AI tools like GitHub Copilot for work makes this accountability even more important.
Finding developers who use Copilot is not difficult. One developer found it "wonky" and "slow," but welcomed the quick API suggestions. There are numerous other conflicting evaluations. Developers appreciate the way it expands boilerplate code, detects and suggests pertinent APIs, and more. That Copilot's recommendations are "usually accurate," as developer Edwin Miller puts it, is both a plus and a drawback. A tool that can be trusted the majority of the time is a wonderful thing, but it also presents a challenge: you need to be an experienced developer to recognize when its recommendations shouldn't be taken at face value.
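To make the "usually accurate" problem concrete, here is a hypothetical sketch (not an actual Copilot output) of the kind of suggestion that looks right at a glance but changes behavior in a way a novice might miss:

```python
# A plausible AI-style completion for "remove duplicates from a list".
# It looks correct, but set() discards the original ordering, so the
# result may come back shuffled. That is exactly the kind of subtle
# flaw an experienced reviewer catches and a beginner might ship.
def dedupe_suggested(items):
    return list(set(items))

# What a careful reviewer would write instead: dict preserves insertion
# order (Python 3.7+), so this keeps the first occurrence of each item
# in its original position.
def dedupe_reviewed(items):
    return list(dict.fromkeys(items))

print(dedupe_reviewed([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Both versions "remove duplicates," and both would likely pass a quick eyeball test; only the second preserves order, and knowing the difference matters requires exactly the experience the article describes.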
Once more, this is not a serious issue. It's a good thing if Copilot can help engineers save some time, right? It is, but it also means developers must be accountable for the outcomes of using Copilot, making it not necessarily a wise choice for developers who are just starting out in their careers. For a less experienced developer, what seems to be a shortcut could produce subpar results. A novice shouldn't attempt to use those shortcuts, because doing so could hinder their ability to learn the art of programming.
So sure, let's use AI to make driving, searching, and programming better. Let's not forget, though, that knowledgeable people must keep their hands on the wheel until we can fully rely on AI's outcomes.