
Decision Intelligence & Cognitive AI.

Rudy Nausch

Part 2.

This is a continuation of the earlier Large Language Models and Decision Intelligence article.


In that piece we looked at Decision Intelligence and how LLMs could be applied: defining the data science applications and atomising the processes within a cognitive architecture from a social and behavioural science perspective, set within an organisational structure alongside the decision engineering components needed to support such a system.


The term AI Agent is becoming increasingly common, and we are seeing a number of applications in this space. Of particular interest is "ChatDev", a fully automated AI software development studio that produced a game for less than $1 and in under seven minutes.


Researchers at a Chinese university used a cognitive architecture of chained LLMs, each assigned one of the roles needed to build a game, to create a virtual software development studio, complete with a nifty GUI to monitor its progress.

Based on the waterfall model — a sequential approach to creating software — the process was broken down into four stages in chronological order: designing, coding, testing, and documenting.


During each stage, the AI agents chatted with one another with minimal human input to complete specific parts of the software-development process — from deciding which programming language to use to identifying bugs in the code — until the software was complete.
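
As a rough illustration of how such a role chain can work (this is not the ChatDev code), the sketch below chains four stage "agents" in waterfall order. The call_llm stub stands in for any chat-completion API, and the role prompts are my own assumptions rather than the prompts the researchers used.

```python
# Hedged sketch of a ChatDev-style waterfall of role agents.
# call_llm is a placeholder for a real chat-completion call; the roles and
# prompts are illustrative assumptions, not the paper's actual ones.

def call_llm(system_prompt: str, message: str) -> str:
    """Stand-in for a real LLM call; returns a canned string for demonstration."""
    return f"[{system_prompt[:20]}...] response to: {message[:40]}"

ROLES = {
    "designing":   "You are the CTO. Turn the idea into a concise design spec.",
    "coding":      "You are the programmer. Write code that satisfies the spec.",
    "testing":     "You are the reviewer. List bugs and propose fixes.",
    "documenting": "You are the technical writer. Produce a short user manual.",
}

def run_waterfall(idea: str) -> dict:
    """Pass each stage's output to the next stage, waterfall-style."""
    artefacts, handoff = {}, idea
    for stage, system_prompt in ROLES.items():
        handoff = call_llm(system_prompt, handoff)  # agents "chat" stage by stage
        artefacts[stage] = handoff
    return artefacts

if __name__ == "__main__":
    for stage, output in run_waterfall("A simple 2048-style sliding tile game.").items():
        print(f"--- {stage} ---\n{output}\n")
```

Each stage's output simply becomes the next stage's input, which is all the waterfall structure amounts to here.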


The researchers ran the experiment across different software scenarios and applied a series of analyses to them to see how long it took ChatDev to complete each type of software and how much each one would cost. The paper states about 86.66% of the generated software systems were "executed flawlessly."


Gamifying gamification for games... inception-level stuff.


The games average around 160 lines of code and are quite simple; however, as a proof of concept this is a deep insight into our future, and it is directly applicable to a cognitive architecture for Decision Intelligence. The underlying capability this technology needs is a basic logic unit provided by an LLM. The LLM can access specific tools, functions and capabilities defined through advanced prompt engineering and APIs, and there are already specialised LLMs that act as multifunctional API bridges, such as Gorilla.
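
As a minimal sketch of that basic-logic-unit idea, assuming a hypothetical llm_choose_tool stub in place of a real model and two invented tools, the routing loop might look like the following. Systems like Gorilla generate real API calls from the request itself rather than matching keywords.

```python
# Hedged sketch of an LLM acting as a "basic logic unit" that routes requests
# to tools. The tool registry, keyword dispatch and llm_choose_tool stub are
# illustrative assumptions, not any production agent framework.

from typing import Callable

def get_weather(city: str) -> str:
    return f"Weather in {city}: 21 degrees C, clear."

def get_exchange_rate(pair: str) -> str:
    return f"Exchange rate for {pair}: 1.07"

TOOLS: dict[str, Callable[[str], str]] = {
    "weather": get_weather,
    "fx": get_exchange_rate,
}

def llm_choose_tool(request: str) -> tuple[str, str]:
    """Stand-in for the LLM deciding which tool to call and with what argument."""
    if "weather" in request.lower():
        return "weather", request.split()[-1]
    return "fx", "EUR/USD"

def agent(request: str) -> str:
    tool_name, argument = llm_choose_tool(request)
    return TOOLS[tool_name](argument)  # execute the selected tool

print(agent("What is the weather in Lisbon"))
```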


The basic logic unit is where this technology is stalled.


We know that LLMs can produce incredibly lucid responses to almost any request, and those responses reflect the heuristic logic of their training data. Often, if not always, the outputs are not pragmatically actionable without retrieval-augmented generation (RAG) and a smart prompt engineering framework, and sometimes not even then. This is, in my opinion, why the promise of LLMs has stalled slightly since the intense excitement we experienced earlier this year. Prompt engineering aside, the context window challenges are not easily addressed, even with workarounds like MemGPT, Reflexion and RAG. The net result is a summary-of-summaries approach, with some clever prompting to activate the model weights relevant to a specific request.
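
To make the summary-of-summaries point concrete, here is a minimal sketch; the summarize stub, chunk size and recursion scheme are illustrative assumptions, not how MemGPT, Reflexion or any particular RAG framework actually manages context.

```python
# Hedged sketch of the "summary of summaries" workaround for context limits.
# summarize is a stand-in for an LLM summarization call; here it just truncates.

def summarize(text: str, max_chars: int = 200) -> str:
    """Placeholder for an LLM summarization call."""
    return text[:max_chars]

def chunk(text: str, size: int = 1000) -> list[str]:
    """Split the text into context-window-sized pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summary_of_summaries(document: str, context_limit: int = 1000) -> str:
    """Recursively compress a document until it fits the context window."""
    current = document
    while len(current) > context_limit:
        partial_summaries = [summarize(piece) for piece in chunk(current)]
        current = "\n".join(partial_summaries)  # summaries become the new input
    return current

long_document = "decision intelligence " * 500
print(len(summary_of_summaries(long_document)))
```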


The promise of graph-based and relational context understanding methods, working within Large Multimodal Models (LMMs) that can "see" and update these visual frameworks, could add the dimension we are missing. The emergent possibilities of LMMs are still opaque to participants outside big tech, but they could be the "secret sauce" that enables functional Decision Intelligence agentic architectures. A working, trustable DI architecture will fundamentally change the face of business and society.


As always, what an amazing time to be alive!



