
With AI-Native SWE, Don't Abdicate to Skynet

As the story goes, it was August 29, 1997. The day the machines took over. Skynet was granted control of strategic US defense on this date and became self-aware. It decided that all humans were a threat to its existence and triggered a nuclear Armageddon. Under the leadership of John Connor, the human resistance eventually destroyed Skynet's defense grid in 2029. However, "Terminators" were dispatched back in time to clean up the rest, including killing Sarah Connor before John Connor could be born. Cyberdyne Systems made the fatal error of placing too much trust in the AI's neural networks. That trust was misplaced due to arrogance, greed, and ego, and it was too late to correct the mistake. Human existence was decided in a nanosecond. For those of you too young to remember the 1984 film, this was one of Arnold Schwarzenegger's biggest blockbusters.


Today, history is ironically poised to repeat this fictional tale. A small number of companies are being given disproportionate control over defense intelligence, warfighter theatre operations, coordinated control of drones, space-based LEO internet communications, and robotic warfare. All the basic components are coming into focus, and the vision is clear with CJADC2 (Combined Joint All Domain Command and Control) and the "Internet of Military Things". At the heart of it all is Software Engineering. For all the talk of guardrails, it is here that the critical choices will be made.


In my view, and in my recent experience developing our AI-Native Software Engineering capabilities, what needs to be introduced are "human interlocks". Interlocks connect two participating components so that the operation of one part is constrained by the other. This means that LLMs and Generative AI, while in control of generating code in AI-Native Software Engineering, must be constrained and controlled by humans. However, if you look at almost all recent approaches to this paradigm shift (other than ours), whether LangChain, OpenAI, or others, control is being abdicated to the soon-to-be AGI. Most importantly, LLMs should not be given control over environments. They need to be treated as slaves, not masters. The same agency can be achieved through "functions" without ever giving these systems direct access to programs or file systems.
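As a rough illustration of the interlock idea, consider the minimal sketch below. The names (ProposedAction, HumanInterlock, write_artifact) are mine and not from any particular framework: the LLM only proposes a function call, a human must approve it, and only the host process ever executes it.

```python
# Minimal sketch of a "human interlock" around LLM function calling.
# All names here are illustrative assumptions, not a real SDK.

from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class ProposedAction:
    """A function call the LLM proposes; it has no power to execute it."""
    name: str
    arguments: Dict[str, Any]
    rationale: str


class HumanInterlock:
    """Executes a proposed action only after explicit human approval."""

    def __init__(self, registry: Dict[str, Callable[..., Any]]):
        # Only functions registered here can ever run; the LLM cannot extend it.
        self.registry = registry

    def review_and_execute(self, action: ProposedAction) -> Any:
        if action.name not in self.registry:
            raise PermissionError(f"Unknown function: {action.name}")
        print(f"LLM proposes {action.name}({action.arguments}) because: {action.rationale}")
        answer = input("Approve? [y/N] ").strip().lower()
        if answer != "y":
            return "REJECTED: human declined the proposed action."
        return self.registry[action.name](**action.arguments)


def write_artifact(path: str, content: str) -> str:
    """The host writes files; the LLM only supplies the content."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Wrote {len(content)} bytes to {path}"


if __name__ == "__main__":
    interlock = HumanInterlock({"write_artifact": write_artifact})
    # In practice the ProposedAction would be parsed from an LLM response;
    # it is hard-coded here to keep the sketch self-contained.
    proposal = ProposedAction(
        name="write_artifact",
        arguments={"path": "generated_module.py", "content": "print('hello')\n"},
        rationale="Scaffold the module requested in the current work item.",
    )
    print(interlock.review_and_execute(proposal))
```

The point of the design is that the registry, the file system access, and the approval gate all live in deterministic code that humans own; the model never touches the environment directly.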


Instead, control should be distributed into external, decentralized Expert Systems (Declarative AI) that merely use LLM APIs as slaves for artifact generation and nothing more. The modern terminology for this older form of AI is the MRKL architecture - pronounced "miracle" and standing for Modular Reasoning, Knowledge and Language. While Declarative AI is not as easy or efficient to build out, it is controllable and observable. Humans should assume the role of oversight and review, rather than abdicate to the laziness of single-shot or few-shot prompting to achieve outcomes. Vertical AI Agents can automate the process of Software Engineering (as in the case of SoftwareFactory.ai and Advisor™ :: Architect), but human checkpoints ensure not only that the codebase is protected against probabilistic hallucinations, but also that humans accelerate convergence on the desired scope and prevent solution divergence. Generated code should be stored visibly and locally, under the control of humans. Dumping it into well-known cloud repositories ultimately controlled by the same organizations as the LLM providers creates the ability for the machine to self-improve, morph, and defend itself.
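To make that concrete, here is a hypothetical sketch of such a pipeline. The rule set, file names, and the generate_text stand-in are illustrative assumptions, not SoftwareFactory.ai internals: the LLM is invoked purely as a text generator, the output lands in local storage, and nothing moves forward without human review.

```python
# Minimal sketch of a declarative, expert-system-driven pipeline in which the
# LLM is called only to generate an artifact; it never touches the environment.
# generate_text() is a stand-in for whatever LLM API is used.

import pathlib
from typing import Callable, Dict, List

# Declarative rules: the expert system (not the LLM) decides what gets built.
RULES: List[Dict[str, str]] = [
    {"artifact": "order_service.py", "spec": "REST handler for order submission"},
    {"artifact": "order_model.py", "spec": "Data model for an order record"},
]


def run_pipeline(generate_text: Callable[[str], str], out_dir: str = "generated") -> None:
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for rule in RULES:
        prompt = f"Generate Python source for: {rule['spec']}"
        code = generate_text(prompt)                 # LLM produces text, nothing else
        target = out / rule["artifact"]
        target.write_text(code, encoding="utf-8")    # stored locally, under human control
        # Human checkpoint: nothing leaves the local review queue unapproved.
        print(f"[REVIEW PENDING] {target} - approve before commit or merge.")


if __name__ == "__main__":
    # Stand-in generator so the sketch runs without any external service.
    run_pipeline(lambda prompt: f"# TODO: implement\n# spec: {prompt}\n")
```

The design choice worth noting is that the control flow, the storage location, and the review gate are all owned by the host organization; swapping in a different LLM provider changes nothing about who is in charge.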


Observability and control are paramount if the future foretold in "The Terminator" is to be avoided. We can't blindly trust the Cyberdyne Systems of 2025. It could lead to Judgment Day. Which leads me to ask: who is the John Connor of 2025? Looking back, the obvious choice is Alistair Cockburn. His earlier career emphasis on "Humans and Technology" highlighted his humanity and is reflected in his organization's name. In my experiences with Alistair, he has mostly exhibited an anti-establishment, anti-Big Tech, "Rage Against the Machine" kind of demeanor. He did battle in the early OOAD days and cleaned up Use Case dogma and corporate capture. He fought back against gangs and the UML mafia attempting to create "industry standards" that would normalize a pre-ordained "kernel" for Software Engineering ecosystems based on existing commercial "cards". He facilitated the launch of the memetic Agile Movement to give developers a voice against bureaucracy. His resume seems to fit the bill of a rebel leader, don't you think?


I'll be back.



