How much of AI is really AI?

Builder.ai was caught passing off the work of hundreds of coding pros in Bangalore as the output of a generative AI capable of building functional software. This was a billion-dollar company backed by a number of notable investors (including Microsoft!), so the reveal was a total shock to everyone involved. This wasn’t some shady NFT startup or a mathematically improbable better-living gadget – this was AI. This was supposed to be trustworthy. This was supposed to be measurable!

How did this happen? The truth is, big investors at this point (even tech-focused ones like Microsoft) are comfortable never seeing the ‘inside’ of a project when it comes to AI. The engineers can explain it and demo it, so it must work like they say it does, right? This is, in fact, a known issue with AI programs: once a model is constructed and trained, the process it takes to reach a particular output is not traceable. It becomes a black box. Asking how the machine got to an answer will lead the asker in circles: the training data, the neural network itself, a change made back in 2023, more training data, tweaks to remove bias, et cetera. A lot of ‘maybes’ come forth, but no clear answer. So people stopped asking! The asking was useless! We got to a point where Microsoft was in no position to verify that an actual AI program existed at all. What would that verification have even looked like?

This is not the first time a company has faked the abilities of its AI. In a related (if less dramatic) case, users of a number of AI image apps found that their requests had been quietly modified to include the phrase ‘ethnically ambiguous’. A model can be biased if its training data is biased, and to compensate for the training pool, the apps’ engineers would silently append phrases like ‘ethnically ambiguous’ to the end of users’ prompts (most famously in the picture of Homer Simpson being threatened by knives, seen here on KYM: https://knowyourmeme.com/editorials/guides/why-is-ai-inserting-the-phrase-ethnically-ambiguous-into-ai-generated-images-racial-bias-in-ai-art-explained). Unfortunately, the machine would often render that literal text somewhere in the picture!
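To make the trick concrete, here’s roughly what that kind of prompt patch looks like in code. This is a minimal sketch, not any app’s actual source; the function name and the list of terms are invented for illustration.

```python
import random

# Hypothetical list of bias-correcting phrases an app might quietly append.
DIVERSITY_TERMS = ["ethnically ambiguous", "diverse"]

def augment_prompt(user_prompt: str) -> str:
    """Append a correction phrase before the image model ever sees the prompt."""
    return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"

# What the user typed vs. what the model actually receives:
print(augment_prompt("Homer Simpson cornered by a dozen knives"))
# e.g. "Homer Simpson cornered by a dozen knives, ethnically ambiguous"
```

The catch, as noted above: image models treat prompt text as stuff to draw, so the appended phrase can end up painted into the picture itself.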

That’s silly, right? But it’s a limit of AI. The researchers have already fed in a huge pool of data, and the model can’t easily be tilted by adding more, since it has consumed pretty much everything the open web has to offer. So its biases have to be corrected on the input side, because the generation process itself is much harder to adjust. Imagine trying to tamper with a normal person’s imagination when you ask them to picture Homer with a bunch of knives pointed at him. Would it be easier to somehow start touching individual neurons and hope that, as a result, they sometimes imagine a Homer who could be Brazilian? Or to just tell them that Homer is Brazilian in this scenario, and trust they’d know what you meant by that? The second one, right? AI works much the same way, and it’s a limit of the tech that companies couldn’t quite correct. They tried to conceal it instead, because the hype train surrounding AI wasn’t ready for the bad news of ‘the AI won’t make pictures of minority groups in positions of power unless specifically told to’. Seeing as one of the arguments for AI running everything is its supposed logic and rationality, this was a very ugly problem!

This new rash of pretenders springs from much the same source. Code generated by actual AI programs doesn’t always work: sometimes it pulls in malicious bits of code from elsewhere, breaks the other software it’s supposed to work with, or is simply redundant. So the easier thing to do is fake it, and hope the AI catches up (or at least that the problems get more subtle) while the marketing and management teams buy time. Maybe Builder.ai had nothing in the works, or maybe it did plan to eventually swap a real AI into its code-on-demand service, but either way, this time, it lied!
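For what it’s worth, the ‘pulls in malicious bits of code’ failure mode is something teams can screen for. Here’s a rough sketch of the kind of import audit one might run on AI-generated Python before trusting it; the trusted-package list is a made-up example, not a real standard.

```python
import ast

# Hypothetical allowlist; a real project would maintain its own.
TRUSTED_PACKAGES = {"json", "math", "datetime", "requests"}

def untrusted_imports(source_code: str) -> set:
    """Return top-level packages the generated code imports that aren't on the allowlist."""
    found = set()
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - TRUSTED_PACKAGES

generated = "import requests\nimport totally_real_helper_lib"
print(untrusted_imports(generated))  # {'totally_real_helper_lib'}
```

A check like this catches the ‘hallucinated dependency’ problem, where a model confidently imports a package that doesn’t exist and an attacker later registers that name with malicious code inside.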

Sources:

https://www.techspot.com/news/108173-builderai-collapses-after-revelation-ai-hundreds-engineers.html

https://medium.com/@jyunhao/the-black-box-problem-can-we-trust-ai-generated-output-f3c51db4f8fd

Interested in real AI solutions? Talk to us: https://elixistechnology.com/managed-it-solutions/