
A team of researchers recently released data about AlphaCode, “a deep-learning model that can achieve approximately human-level performance” in coding.
Could this model write code on par with, or better than, humans? According to the researchers, it isn’t likely.
They say the system – developed by DeepMind, a subsidiary of Google parent company Alphabet – may assist experienced coders but “probably cannot replace them,” per a report this week in the journal Science.
However, Armando Solar-Lezama, head of the computer-assisted programming group at the Massachusetts Institute of Technology, said “it’s very impressive, the performance they’re able to achieve on some pretty challenging problems.”
According to a study about AlphaCode published Wednesday, the model was developed using “self-supervised learning and an encoder-decoder transformer architecture,” and it performed well on Codeforces, a platform that regularly hosts computer programming competitions.
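For readers who want a concrete picture of that architecture, here is a minimal sketch of an encoder-decoder transformer in Python, using PyTorch’s built-in `nn.Transformer` module. The class name, vocabulary size, and layer dimensions are illustrative placeholders, not AlphaCode’s actual (far larger) configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqTransformer(nn.Module):
    """Minimal encoder-decoder transformer sketch: the encoder reads a
    tokenized problem description and the decoder generates solution
    tokens. All sizes are illustrative, not AlphaCode's real settings."""

    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_size, d_model)
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each decoder position attends only to earlier tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(
            self.src_embed(src_ids), self.tgt_embed(tgt_ids), tgt_mask=tgt_mask
        )
        return self.out_proj(hidden)  # logits over the next-token vocabulary

model = Seq2SeqTransformer()
problem = torch.randint(0, 32000, (1, 64))  # tokenized problem statement
partial = torch.randint(0, 32000, (1, 16))  # solution tokens generated so far
logits = model(problem, partial)            # shape: (1, 16, 32000)
```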
These competitions “are popular tests among programmers that require critical thinking informed by experience and creating solutions to unforeseen problems, both of which are key aspects of human intelligence but challenging to mimic by machine learning models,” the study explained.
AlphaCode achieved an average ranking in the top 54.3% in simulated evaluations on recent Codeforces programming competitions, researchers said. It solves problems “by generating millions of diverse programs using specially trained transformer-based networks and then filtering and clustering those programs to a maximum of just 10 submissions.”
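The study does not reproduce AlphaCode’s pipeline code, but the strategy it describes (sample a huge pool of candidate programs, discard those that fail the problem’s example tests, then group the survivors by behavior and submit at most one per group) can be sketched roughly as follows. The helper names `generate_candidates` and `run_program` and the cluster-by-output heuristic are hypothetical stand-ins for illustration.

```python
from collections import defaultdict

def select_submissions(problem, generate_candidates, run_program,
                       example_tests, held_out_inputs, max_submissions=10):
    """Rough sketch of a generate-filter-cluster pipeline like the one
    the study describes. Helper functions are hypothetical stand-ins."""
    # 1. Sample a large pool of candidate programs
    #    (AlphaCode reportedly generates millions).
    candidates = generate_candidates(problem)

    # 2. Filter: keep only programs that pass the visible example tests.
    passing = [
        prog for prog in candidates
        if all(run_program(prog, inp) == out for inp, out in example_tests)
    ]

    # 3. Cluster: programs producing identical outputs on extra held-out
    #    inputs are treated as behaviorally equivalent.
    clusters = defaultdict(list)
    for prog in passing:
        signature = tuple(run_program(prog, inp) for inp in held_out_inputs)
        clusters[signature].append(prog)

    # 4. Submit one representative from each of the largest clusters,
    #    capped at the competition's submission limit.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:max_submissions]]
```

The clustering step matters because many sampled programs behave identically even when their text differs; submitting one representative per cluster spends the 10-submission budget on genuinely distinct approaches.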
Study authors said that AlphaCode is the first artificial intelligence system to perform competitively in programming competitions.
Another type of artificial intelligence made headlines this week, leading some to question the ethical implications of the technology. It came in the form of portraits generated through the Lensa app and widely shared on social media. As of Wednesday, Lensa was the top free app in Apple’s App Store, according to CNBC.
“It’s taking all kinds of biometric data from your face – not so much your body, and that’s a whole other topic – but your face, and it’s creating artist renderings of these,” said tech expert Jennifer Jolly this week during an interview with WBBM’s Noon Business Hour.
She said users should be careful about the data they feed into AI apps. Host Rob Hart also noted that artists have complained about Lensa stealing their work. Lensa’s creators say they haven’t stolen anything, though some artists have found that explanation flimsy.
“Many of the artists who are saying this isn’t right – they don’t have the deep pockets that the tech world has,” said Jolly.
ChatGPT, an AI chatbot that “creates surprisingly intelligent-sounding text in response to user prompts,” according to the journal Nature, has also caused worry in academia.
“At the moment, it’s looking a lot like the end of essays as an assignment for education,” said Lilian Edwards, who studies law, innovation and society at Newcastle University, U.K.
As in the AlphaCode case, other experts note that ChatGPT is limited and cannot truly compete with humans. In fact, game designer Ian Bogost wrote in The Atlantic this week that the bot is “dumber than you think.”
It “lacks the ability to truly understand the complexity of human language and conversation,” he said.
“This problem of new social content, new entertainment for people, is one of those society things that we have to figure out, we have to talk about,” said Jolly. “It’s one of many that are coming as we dig deeper and deeper into artificial intelligence and all of that.”