CAS: Recreating my self consciousness

11-02-2023
6 min read
1077 words

“Creativity is the residue of time wasted”
Albert Einstein

Introduction

Man’s quest for immortality has never been closer to an end. This is the year 2023, and with the enormous advancements in artificial intelligence, it is becoming increasingly clear that the potential to achieve immortality is no longer a mere fantasy. I repeat: this is not a drill! This CAS experience logs my attempt to model my consciousness using a neural network, and the results I achieved.

Demo

If you are anything like me, you just want to try the demo and mess around with it. Below is the embedded model. (Warning: It might swear!)

Link: /projects/manu-v2

❗ This model is extremely broken.

It has an accuracy of about 43% and there is really nothing I can do to make it better. That's all. Here be dragons!


4/12/22

The story begins here. To understand how a machine thinks, I dove into the field of machine learning and neural networks. I initially wanted to use these tools to create a computer program that was “self-aware”. As I delved deeper, I realized that creating a conscious AI is an extremely challenging task, but I was determined to try. I started by studying the human brain and its complex neural networks, hoping to replicate its structure in a computer program.

The Architecture of the Human Mind

The human brain is a marvel of evolution, honed over millions of years to be the most efficient and adaptable machine on the planet. Could a mere computer program ever hope to replicate its intricacy?

The human mind works through a dense network of neurons firing and communicating with each other. Neurons are the building blocks of the brain, responsible for transmitting information throughout the nervous system.

When a neuron is stimulated, it sends an electrical signal down its axon, which triggers the release of neurotransmitters at the synapses. These neurotransmitters then bind to receptors on neighboring neurons, either exciting or inhibiting their firing. This complex web of neuronal connections forms the basis of all thought, perception, and behavior.

Replicating this using Mathematical Terms

To understand this more clearly, I looked at the communication between two neurons. This is essentially a matrix multiplication: we are given two matrices, an input matrix and an output matrix.

$$\begin{align*}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} \xrightarrow{\quad f\quad} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_m \end{bmatrix} \end{align*}$$
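As a rough sketch, this mapping is just a weight matrix applied to the input vector. The sizes n = 4 and m = 3 and the random weights below are illustrative, not taken from my actual model:

```python
import numpy as np

# Hypothetical sizes for illustration: f maps an n-dimensional input
# to an m-dimensional output through a weight matrix W.
n, m = 4, 3
rng = np.random.default_rng(0)
W = rng.random((m, n))   # one weight per input-output connection
x = rng.random(n)        # input "neuron activations"

y = W @ x                # the matrix multiplication that implements f
print(y.shape)           # (3,)
```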

Each item in the matrix can be, say, $\{q \in \mathbb{R} : 0 \leq q \leq 1\}$. We define a function to model the triggering of the release of neurotransmitters. This is called the “activation function”. It decides how “likely” the neuron is to activate, given a set of initial parameters. Here are a few common functions that are used in machine learning:
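To make those concrete, here is a sketch of three of the usual textbook activation functions in NumPy (the post doesn't say which ones the model actually used, so these are common examples, not the author's exact choices):

```python
import numpy as np

def sigmoid(z):
    """Squashes any real input into (0, 1) - a smooth on/off switch."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Passes positive inputs through unchanged, zeroes out negative ones."""
    return np.maximum(0.0, z)

def tanh(z):
    """Like sigmoid but centred on zero, mapping into (-1, 1)."""
    return np.tanh(z)

print(sigmoid(0.0), relu(-2.0), tanh(0.0))  # 0.5 0.0 0.0
```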

Using this, I created a million blank neurons to model an “empty” brain.

Making the machine “understand” language

To make a machine understand language, I used a technique called natural language processing (NLP). NLP involves teaching machines to understand and process human language by breaking down sentences into smaller components such as words and phrases, and then analyzing their meanings and relationships. The current model can only take numbers as input, so how can we convert language into a matrix that the model can understand? We use the process of tokenization.

For each word, I assigned a unique number, and for words that are out of the model's vocabulary, I used an OOV (Out of Vocabulary) token to indicate this. This in turn creates an embedding of each word with “strengths” and relationships to other words.
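A minimal, pure-Python sketch of this tokenization scheme (reserving id 0 for padding and id 1 for the OOV token is my assumption here; libraries like Keras's `Tokenizer` follow a similar convention):

```python
OOV = 1  # reserved id for out-of-vocabulary words (0 is kept for padding)

def build_vocab(sentences):
    """Assign each word a unique integer id, starting after the reserved ids."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 2  # 0 = pad, 1 = OOV
    return vocab

def tokenize(sentence, vocab):
    """Map every word to its id, falling back to the OOV token."""
    return [vocab.get(word, OOV) for word in sentence.lower().split()]

vocab = build_vocab(["hello world", "hello there"])
print(tokenize("hello unknown world", vocab))  # → [2, 1, 3]
```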

Training the Model

After building the architecture and preparing the input data, the next step was to train the model. I used supervised learning, where the model was trained on a dataset consisting of input/output pairs. In this case, that was Reddit comments and Twitter threads.

Results!

To improve the initial results, I explored various sources and incorporated a simple spam filter to eliminate irrelevant posts such as "troll" comments and copypastas. After a week of refining, I was pleased with the progress I had made. I shared the link with the other students in the group chat, and I got a lot of positive feedback! Here are some of my favorite prompts/completions:

9/2/23

Initially, I viewed this project as a good fit for my CAS experience, but like always, I found myself procrastinating on the reflection log. However, during my experimentation, I had a rather peculiar idea: what if I could use this model to create a virtual clone of myself? Driven by curiosity, I began working on this idea.

Using my messages on Discord and WhatsApp, I exported my chats to JSON and created input-output pairs using:

  1. @mentions, and replies
  2. time-based replies (if the message was sent within 30 seconds of the previous one, it was considered a reply)
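The time-based rule can be sketched roughly like this; the message tuples and the name "manu" are hypothetical stand-ins for the exported chat JSON, not the actual data format:

```python
from datetime import datetime, timedelta

# Hypothetical exported-message shape: (author, timestamp, text).
messages = [
    ("friend", datetime(2023, 1, 5, 18, 0, 0), "did you finish the lab?"),
    ("manu",   datetime(2023, 1, 5, 18, 0, 12), "yep ✨"),
    ("friend", datetime(2023, 1, 5, 19, 30, 0), "unrelated message hours later"),
]

REPLY_WINDOW = timedelta(seconds=30)

def make_pairs(messages, me="manu"):
    """Treat my message as a reply if it follows someone else's within 30 s."""
    pairs = []
    for prev, curr in zip(messages, messages[1:]):
        if curr[0] == me and prev[0] != me and curr[1] - prev[1] <= REPLY_WINDOW:
            pairs.append((prev[2], curr[2]))
    return pairs

print(make_pairs(messages))  # → [('did you finish the lab?', 'yep ✨')]
```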

With this, I fine-tuned the model to behave more like I usually do (one word responses, excessive use of emojis ✨, and occasional swears). I managed to scrape a dataset of 30,000 messages and used this to train the model further. With all that done, I made the demo public for the world to view!

Link: /projects/manu-v2

Reflection

Obviously, this is not an accurate recreation of who I am, but in the very near future I expect it to be possible to create realistic clones of human personalities. I can now finally have my virtual self do all the things I've been putting off. Need to go to the gym? Send in the clone! Have an important meeting to attend? Send in the clone! Want to go on a date with that cute someone you've been eyeing? Well, maybe not that one.

But the possibilities are endless! My clone can handle all the mundane tasks while I enjoy my free time doing the things I actually enjoy. Who needs a personal assistant when you have a virtual clone of yourself? I can finally have my cake and eat it too.
