Why NOT use AI?


It might seem like everybody is jumping on the GenAI bandwagon, but there are plenty of valid reasons to stand back, at least for the moment. If these are not enough to put you off, then it’s worth thinking about mitigations.

Individual engineer and business concerns

  1. There is a lot of churn in tools and practices, so you’ll be navigating a constantly changing landscape and getting a workout on the upgrade treadmill.

  2. Best practices and productive approaches haven’t been worked out yet, so you will be spending time figuring it all out yourself.

  3. There is no unequivocal evidence of productivity gains, especially once you take into consideration the (quite real) possibility of skill atrophy over the longer term.

  4. Possible negative long-term effects on the skills of engineers (de-skilling due to over-reliance on AI).

  5. Ethical concerns about the high energy and water use of the data centres required to operate LLMs, the effects on the labour market, and copyright issues (relating to both training data and generated output). We’re starting to see court cases around unauthorised use of data by AI companies, concerns about indiscriminate and resource-intensive content scraping, and so on.

  6. Legal concerns about copyright and licensing issues with generated output—no copyright on AI-generated output unless it is a “derived work” (a problem for brand assets, for example, which can then be copied by anyone), possible accidental copyright infringement, and possible non-compliance with licences applicable to code which was used for model training.

Technical concerns

  1. Possible negative effects on software quality due to hallucinations, plain buggy solutions, lack of business context, the need for eternal vigilance from engineers reviewing AI code, and replacement of quality with quantity (aka AI slop).

  2. Unnecessary complexity and tech debt. Generated code can be overly complex, fail to follow project conventions, or otherwise be suboptimal in a multitude of ways. However, the sheer volume of generated code and the psychological pull of accepting plausible-looking output could easily lead to it being merged anyway, eventually leaving an overwhelming mass of complexity and technical debt.

Security concerns

  1. Generation of insecure or vulnerable code. Since LLMs are trained on publicly available code, they are going to reproduce whatever is there, including vulnerabilities. By some estimates, nearly half of AI-generated snippets contain security flaws like buffer overflows or missing validation.

  2. Novel attacks are being developed. For example, generated code may reference hallucinated dependencies (aka “slopsquatting”), which attackers then publish as malicious packages. Alternatively, attackers may encourage LLMs to pick up fake dependencies by publishing fake documentation, tutorials, and the like.

  3. Prompt injection, where crafted input manipulates the LLM into unintended behaviour, including bypassing safety measures.

  4. Data exfiltration and leakage. For example, sensitive data (e.g. API keys or customer data) may be added to the context window, and might thus appear in the output later—potentially in a different context where it may not be caught.

  5. Data poisoning attacks, where attackers corrupt AI training data, causing the model to insert backdoors or misbehave later.
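
As a concrete illustration of the slopsquatting risk above, one simple mitigation is to vet every dependency an AI assistant suggests against a curated allowlist before anything is installed. The sketch below is minimal and hypothetical—the allowlist contents, the function name, and the typo’d package are all illustrative:

```python
# Hypothetical guard against "slopsquatting": before installing anything an
# AI assistant suggested, verify each package name against a curated allowlist.
ALLOWED_PACKAGES = {"requests", "numpy", "pandas"}  # project-specific allowlist

def vet_requirements(requirements: list[str]) -> list[str]:
    """Return the requirements that are NOT on the allowlist."""
    suspicious = []
    for line in requirements:
        # Strip version specifiers like "requests>=2.31" down to the bare name.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        if name and name not in ALLOWED_PACKAGES:
            suspicious.append(name)
    return suspicious

# "reqests" (a plausible hallucinated typo of "requests") gets flagged.
print(vet_requirements(["requests>=2.31", "reqests"]))  # ['reqests']
```

A real setup would go further—for instance, pinning hashes in requirements files—but even a crude check like this turns a silent supply-chain risk into a visible review step.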

Naturally, there are more complex scenarios as well. For example, in the case of agents, the combination of prompt injection and tool use can lead to data exfiltration (what Simon Willison describes as the lethal trifecta).
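
The leakage scenarios above can be partially mitigated by scanning model output for credential-like strings before it crosses a trust boundary. The sketch below is illustrative only—the two patterns are a tiny sample of what real secret scanners check, and the example key is the well-known AWS documentation placeholder:

```python
import re

# Minimal sketch of an output filter for AI-generated text: flag strings that
# look like credentials before the text leaves a trusted boundary. These two
# patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key[\"']?\s*[:=]\s*\S+"), # generic "api_key = ..." assignment
]

def find_secret_like_strings(text: str) -> list[str]:
    """Return substrings of `text` matching any credential-like pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

generated = "config['aws_key'] = 'AKIAIOSFODNN7EXAMPLE'"
print(find_secret_like_strings(generated))  # ['AKIAIOSFODNN7EXAMPLE']
```

Such a filter is a backstop, not a fix: it catches well-shaped secrets, but nothing stops a model from leaking sensitive data in a form no regex anticipates.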

When you consider that GenAI tools are extremely general purpose, ultimately non-deterministic, and are constantly being pushed towards less supervised operation, the security implications do not look good.

Going deeper

By taking away the easy parts of his task, automation can make the difficult parts of the human operator’s task more difficult. — Lisanne Bainbridge, “Ironies of Automation”
