Awakin AI Story
Interview: AI + Inner Transformation
EVOLVE: Today, we sit down with Nipun Mehta. He is the founder of ServiceSpace, a global community working at the intersection of technology, volunteerism and a gift culture. As a designer of large-scale social movements that are rooted in small acts of service and powered by micro-moments of inner transformation, his work has uniquely catalyzed "many to many" networks of community builders grounded in their localities and rooted in practices of cultivating connection – with oneself, each other, and larger systems. Today, ServiceSpace reaches millions every month, is powered by thousands of volunteers, and blossoms into ever-expanding local and virtual service projects that aim to ignite a "whole greater than the sum of its parts". Nipun was honored as an "unsung hero of compassion" by the Dalai Lama, not long before former U.S. President Obama appointed him to a council for addressing poverty and inequality in the US. Yet the core of what strikes anyone who meets him is the way his life is an attempt to bring smiles in the world and silence in his heart: "I want to live simply, love purely, and give fearlessly. That's me."
In 2023, ServiceSpace has been doing a lot of AI-related experiments. So we asked him a few questions about how artificial intelligence might help expand human consciousness.
What are the opportunities and threats of AI in the evolution of human consciousness?
In the AI world, a common refrain is around "alignment". How do we align AI's progress with human potential? That is to say, can we ensure that AI doesn't use its capacity to further goals that don't keep humans at the forefront? That's a baseline metric to gauge whether AI will be a net positive or negative. OpenAI, the company that created ChatGPT, has just announced that it will dedicate 20% of its computing resources to keeping this at the center.
But when thinking about the evolution of human consciousness, we have to ask much deeper questions. Alignment with what kind of human? By market standards, for instance, fulfilling the needs of a consumer is a good goal. Multiplying their wants might even be a greater goal for lifting up GDP. But I’m not so sure that’s a win.
So I would frame it like this: if the majority of the AI evolution is guided by market logic, I think it's a threat. If, instead, it is guided by the logic of love, community, and connection, then I think it can offer an unprecedented opportunity in our history. As one godfather of AI put it, this invention is akin to the invention of the wheel – and I sense he's right.
Why did ServiceSpace start to create its own version of ChatGPT, which you call ServiceSpaceAI? How does it work? How does it differ from ChatGPT and other such approaches?
Our first intention was just to get in the game, so we could explore how to bend its larger arc. Our first beta project was in 2018, when we created a version of "CompassionBot" – it would give a compassion score to some content, what we might call sentiment analysis now. Fast forward to early 2023: ChatGPT was the hype, so we dived in a bit more.
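To make this concrete, here is a minimal sketch of what such a compassion scorer could look like with today's off-the-shelf tools – an illustration only, not the actual 2018 CompassionBot; the model choice and candidate labels are assumptions:

```python
# A minimal sketch of a "CompassionBot"-style scorer using an off-the-shelf
# zero-shot classifier. Illustration only; the model and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def compassion_score(text: str) -> float:
    """Return the probability that the text reads as compassionate."""
    labels = ["compassionate", "neutral", "unkind"]
    result = classifier(text, candidate_labels=labels)
    # The pipeline returns labels re-ordered by score; align them back.
    return dict(zip(result["labels"], result["scores"]))["compassionate"]

print(compassion_score("Volunteers cooked meals for their elderly neighbors."))
```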
The first thing we noticed was that ChatGPT is very "horizontal" – it took in the entirety of the web and then responded to us in a synthesized way. That, however, loses out on a lot of context. So we added a contextual layer to ChatGPT to see how much of a difference that kind of prompt engineering could make – it turned out to be massive! That led us to create our own "vertical" solutions for ServiceSpace's large, 25-year data repository. That yielded impressive results, so we then started helping others with specialized data – authors like Sharon Salzberg and organizations like the Greater Good Science Center. Then, we went a step further and helped individuals create their own bots. We're learning a ton through all these experiments.
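That "contextual layer" can be sketched roughly as retrieval-augmented prompting: find the passages in a vertical corpus most relevant to a question, and prepend them to the prompt sent to a general model. The toy corpus and prompt wording below are placeholders, not ServiceSpace's actual pipeline:

```python
# A rough sketch of a "contextual layer": retrieve the most relevant passages
# from a vertical corpus and prepend them to the user's question before it
# goes to a general-purpose model. Corpus and prompt text are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "A 1999 blog post on starting ServiceSpace as a volunteer experiment.",
    "An Awakin Calls transcript on gift economy and inner transformation.",
    "A KindSpring story about anonymous acts of kindness.",
]

vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Assemble a context-grounded prompt for a general-purpose model."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    top_docs = [corpus[i] for i in sims.argsort()[::-1][:top_k]]
    context = "\n".join(f"- {d}" for d in top_docs)
    return (f"Answer using only this ServiceSpace context:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("How does a gift economy pay the bills?"))
```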
Even with all that we've learned so far, there are literally hundreds and hundreds of applications that can nourish the human spirit. For instance, our AI can scrape all the news stories of the day, select the most compassionate ones, write summaries for each, tag them with searchable keywords, and publish them on our KarunaNews.org portal. That's a huge win, even in the short term. So we're exploring.
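As a sketch of what such a pipeline could look like, here is a skeleton with hypothetical stubs for each stage described above – scraping, scoring, summarizing, tagging, publishing; none of it is KarunaNews's actual code:

```python
# A skeleton of the KarunaNews-style pipeline described above. Every helper
# is a hypothetical stub: in practice fetch_stories would scrape news feeds,
# compassion_score could be the classifier sketched earlier, summarize and
# tag would call a language model, and publish would post to the portal.

def fetch_stories():          # stub: would scrape the day's news feeds
    return [{"title": "Neighbors rebuild burned home", "text": "..."}]

def compassion_score(text):   # stub: would call a trained classifier
    return 0.9

def summarize(text):          # stub: would ask a model for a short summary
    return text[:200]

def tag(text):                # stub: would extract searchable keywords
    return ["kindness"]

def publish(story):           # stub: would post to the portal's CMS
    print("Publishing:", story["title"], story["keywords"])

def run_daily_pipeline(threshold: float = 0.8) -> None:
    """Select, summarize, tag, and publish the day's most compassionate news."""
    for story in fetch_stories():
        if compassion_score(story["text"]) >= threshold:
            story["summary"] = summarize(story["text"])
            story["keywords"] = tag(story["text"])
            publish(story)

run_daily_pipeline()
```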
As we dived in, we quickly noticed that the data used by ChatGPT was quite weak. So we started structuring a bunch of the data on our end around values like compassion and kindness, and soon we will fork off a version of a "large language model" and start to fine-tune it. Even beyond that, we hope to rely less and less on corporate influences and expand into wider-margin possibilities.
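One common shape for such value-centered data is a JSONL file of example exchanges that can later be used to fine-tune an open model; the examples below are invented for illustration, not drawn from ServiceSpace's dataset:

```python
# One common format for structuring value-centered fine-tuning data: a JSONL
# file of example exchanges. The examples are invented for illustration.
import json

examples = [
    {"prompt": "A friend is grieving. What do I say?",
     "completion": "You don't need perfect words. Sit with them; your "
                   "presence says you won't let them grieve alone."},
    {"prompt": "How do I respond to an angry email?",
     "completion": "Pause first. Name the need behind the anger, and "
                   "reply to that need rather than to the tone."},
]

with open("compassion_tuning.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```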
We are also keen to do "human reinforcement" learning – humans teaching the machines. The makers of ChatGPT hired cheap labor online to do a quick-and-dirty job so they could be first to market – and now they just have machines teaching machines. Such shortcuts have always been the bane of human existence – they lead to all kinds of unintended consequences. Instead, can a Wikipedia-like community of volunteers help guide the arc of this innovation? In ServiceSpace, that's always our wide-margin goal – to see how AI can help create volunteer opportunities, in a way that regenerates intrinsic motivations, cultivates community and deepens our consciousness.
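A minimal sketch of what volunteer-driven "human reinforcement" could look like in practice: collecting pairwise preferences, the raw material of reinforcement learning from human feedback. The workflow below is an assumption for illustration, not ServiceSpace's actual tooling:

```python
# A sketch of volunteer feedback collection: pairwise preferences of the kind
# used in reinforcement learning from human feedback (RLHF). This workflow is
# an assumption for illustration, not ServiceSpace's actual tooling.
import json

def record_preference(prompt: str, answer_a: str, answer_b: str,
                      path: str = "volunteer_prefs.jsonl") -> None:
    """Show a volunteer two candidate answers and store the one they prefer."""
    print("PROMPT:", prompt)
    print("A:", answer_a)
    print("B:", answer_b)
    choice = input("Which answer is kinder and wiser? [a/b] ").strip().lower()
    chosen, rejected = (answer_a, answer_b) if choice == "a" else (answer_b, answer_a)
    with open(path, "a") as f:
        f.write(json.dumps({"prompt": prompt, "chosen": chosen,
                            "rejected": rejected}) + "\n")

record_preference("How can I support a grieving friend?",
                  "Send a sympathy card and move on.",
                  "Sit with them and let them know they are not alone.")
```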
So we are considering all this innovation from that lens of inner transformation, not just at a product level but even at the level of process.
The answers of ServiceSpaceAI contain the wisdom that was put into it, but isn’t that a simulation of wisdom? Isn’t wisdom a quality of consciousness that goes beyond certain words or ideas? Or do you think AI can become wise, or support humans becoming wiser?
A simple way to consider the current generative AI tools is as a discovery engine. Right now, to discover content that might be relevant for you, you type a few keywords into Google and figure out how to synthesize the dozens of matching websites and their content. AI takes that to another level, not only by synthesizing and responding in our native languages but by being interactive, so you can ask counter-questions and zoom into what you care about.
If I consider 25 years of ServiceSpace's insightful blogs, good news stories, interview transcripts, inspiring videos and so on – without AI, it's simply not possible to make sense of all of it. When my parents first played with it, they asked all kinds of questions they themselves have gotten from others for decades (like: how do you pay the bills if you serve the community without a price tag?), and the Awakin AI synthesized nuanced responses rather remarkably!
Right now, AI is simply a pattern matcher. It doesn't "think", the way we typically understand that word. So before it spells out a word, it's trying to figure out what goes next – with its trillions of parameters and data scraped off the Internet. When it gets stuck, it jumbles things together probabilistically (often referred to as "hallucinations"). The fact that we, as meaning-making creatures, find value in this relatively basic pattern-matching apparatus might mirror something uncomfortable about us – but that's a sidebar topic. :)
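That "pattern matcher" idea can be made concrete with a toy example: a bigram model that, given the current word, samples the next word in proportion to how often that pair appeared in its training text. Real large language models do this over tokens with billions of parameters, but the probabilistic guessing is the same in kind:

```python
# A toy illustration of next-token prediction: a bigram model that samples
# the next word in proportion to how often each pair appeared in its tiny
# "training data". LLMs do this over tokens at vastly greater scale.
import random
from collections import defaultdict

training_text = ("service begins with a small act "
                 "a small act of kindness begins a ripple "
                 "a ripple of kindness begins with service").split()

continuations = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    continuations[current].append(nxt)   # record every observed next word

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = continuations.get(out[-1])
        if not options:                      # no pattern to match: stop
            break
        out.append(random.choice(options))   # sample by observed frequency
    return " ".join(out)

print(generate("a"))
```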
In terms of AI helping us with wisdom discovery, I don't think any of us mind that. That's what books helped us do. But what feels like a threat is – will AI become aware of itself, will it have an inner experience, will it become conscious? Computer scientists call this AGI – artificial general intelligence – and it raises the larger question known among philosophers as the "hard problem of consciousness".
I don’t know the answers to those. But if you look at our brain, our neurons have a very low coefficient of consciousness, but at some density of interconnection, we become conscious. Can the same thing happen with a machine?
Even if the answers by ServiceSpaceAI are moving, compassionate, and insightful, they cannot replace direct human contact, in which a true heart-to-heart connection is possible. How do you see the danger that people lose even more human contact and retreat to such Chat-Machines that are easily accessible?
It’s worth nuancing what “heart to heart” connection actually means.
Social media has hacked our attention, so our average attention span is now down to 8 seconds – less than even goldfish, previously the lowest of all species! When we can't pay attention, there's very limited scope for us to do anything from our heart. What we call a human connection is all too often a transactional relationship of convenience. I remember that movie, Jerry Maguire, where Tom Cruise says in the elevator, "You complete me." At one level, that feels heart to heart, but isn't it just normalizing using another person to fill a lack in you? That transactional foundation is a very dangerous ground to stand on. Sure, sustained transactional relationships create an attachment that offers some semblance of security – but can a machine provide that? Apparently it can, if 87% of millennials sleep with their phone by their side. Where will that go next with exponential hardware and software advances? I think the writing is on the wall.
That said, are humans capable of relating beyond transactions? Absolutely. Just this week in Colombia, I gave a few talks on “Future of Relationships”, and how we might connect with each other in a multi-dimensional way – even in the age of AI. If we use the “me” logic, which at scale is the market logic, I’m afraid we’ll be seduced by the conveniences of AI; but if we use the “we” logic, or the “what would love do” logic, we will start to honor our relationships as a gateway for inner and collective transformation and healing.
So then, I think your question actually is – will AI help us cultivate non-transactional connections? I think there's a scenario where it does. But before that, the more pressing question might be for the person in the mirror – how much of my day was rooted in a transactional consciousness, and what am I willing to do to shift that center of gravity? If enough of us care to operate with wider circles of reciprocity, we ought to be able to arrive at a win-win scenario with AI.
I asked ServiceSpaceAI, "How can I support someone who has lost a loved one?" and the answer was quite stunning, ending with, "In the end, it isn't about taking the pain away, because unfortunately, we can't. It's about walking alongside them, bearing witness to their sorrow, and reassuring them they're not alone in their grief. We can't make their journey less painful, but we can make sure they don't have to journey through grief alone." At the same time, I was struck that the machine says "we", as if it is a person. What do you think about that?
Some people are very uncomfortable when a machine pretends to have an identity, emotion, love, and so on. Right now, you can program it not to do that – so problem solved. :) But at what point does a silicon-machine belong in the same way that a carbon-human belongs? Who decides those parameters? These are open questions. Even in a human-only ecosystem, it’s hard to discern the line between belonging and misappropriation.
From a more practical lens, we are also taking a unique approach here. Typically, we buy products because they solve a problem for us. Then Big Tech profiles each of us to deliver far more contextual solutions – and even does predictive analysis of what we want before we want it! As soon as I search for flights to Colombia, I start seeing ads for learning Spanish! Underneath all this is a subtler presumption – that people want answers. If I have a question, I buy a product to find an answer. That may be a decent logic for commercial products, but we are testing a different hypothesis – that machines are most deeply aligned with human intentions when they offer us questions and let our innate intuition guide us. Like Socrates, if I approach an AI with a question, can it help me refine and nuance my inquiry and then outsource it to my inner consciousness to find a contextual answer?
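One way to encode that Socratic stance is entirely in the system prompt. The wording below is an invented example, not the actual ServiceSpaceAI prompt, and send_to_llm is a hypothetical stand-in for whatever model API one uses:

```python
# An invented example of a "Socratic" system prompt that asks the model to
# refine the inquiry rather than answer it. Not the actual ServiceSpaceAI
# prompt; send_to_llm is a hypothetical stand-in for a real model API.
SOCRATIC_SYSTEM_PROMPT = """You are a companion for reflection, not an
answer machine. When the user brings a question:
1. Reflect back what you hear at the heart of it.
2. Offer two or three deeper questions that refine the inquiry.
3. Do NOT give advice or a final answer; close by inviting the user to
   sit with the refined question and trust their own intuition."""

def reflect(user_question: str) -> list:
    """Package the exchange in the chat format most model APIs accept."""
    return [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": user_question}]

messages = reflect("How can I support someone who has lost a loved one?")
# response = send_to_llm(messages)   # hypothetical model call
```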
So, on top of content and personalization, we are actually adding a layer of depersonalization. Instead of an answer, we think technology is best when it can bring us to a better question. Like in the response you quoted, the AI doesn't jump to "step 1, 2 and 3 to help someone with grief," but rather invites you to step into your human self more fully and bear witness.
You mentioned the difference between Metaverse and what you call the Metta-verse. Can you explain what you mean by that?
Metaverse is a commonly used term in the tech world, implying a more meta, virtual universe on top of the physical one. Think of it as an ecosystem of purely digital relationships. My avatar engaging with yours.
Mettaverse is a play on words, where metta is an ancient Pali and Sanskrit word that translates to loving-kindness. Instead of giving up on the messiness of human relationships, can we build our equanimity and stay in the "mud" long enough for the lotus to arise? Martin Luther King Jr. once said, "Peace is not the absence of tension." If we cultivate our inner resources to hold the tension long enough, like a guitar string, our connections might unlock a unique melody that will not only bring us together but organically cultivate a field of transformation for everyone.
If we are rooted in a culture of convenience optimized for short-term gain, then the metaverse is quite appealing. Already, 2.5 billion gamers spend more than 8 hours a week inside video games. Add in haptic tech and augmented reality tools, and that engagement will skyrocket.
But humanity owes it to itself to make a compelling argument for why the Mettaverse is a *much* better party!
Service, giving, compassion, and loving kindness are at the core of your work. These are human qualities that AI cannot have. But you seem to be confident that AI, used correctly, can help humans individually and societally develop these heart qualities. Or how do you see that?
I think there’s a chance that AI becomes a tool that can support our inner transformation, reconnect us to each other and cultivate systems that regenerate these eternal values. There are so many projects, for instance, where AI is learning the language of hundreds of plant and animal species – with the hope of co-creating with them in the future!
Yet, we’ll have to learn how to draw the boundaries. If we are just in an endless race to go faster and faster, we’ll crash at some point.
Regardless, if you have a heart of service, there isn't really an option to ignore this massively disruptive wave of innovation. Either we bend the arc of its manifestation, or we keep trying. Either way, we must engage.
You mentioned in an earlier conversation the difference between content and context. AI is good at creating all kinds of content. What do you mean by context and how does this relate to developing our human consciousness?
In a high school classroom, the algebra lessons are the content – but the math teacher’s character is the context. When I wish happy-birthday to my friend, the message on his social media wall is the content but my wish for his well-being is the context. When my mother cooks me a meal, food is the content – but her care is the context.
At a retreat we held at the Gandhi Ashram, one of the youngest volunteers was a 19-year-old, who was confounded when someone asked her, "When you are giving a gift, why do you spend so much time wrapping it? How much of the gift's value lies in the wrapping?" In a content-heavy world, we are continually stripping out the context – but the gifts of what we do or say have always been wrapped by who we are. Same words, same actions, same prayers by two different people have two different effects. The medium affects the message. And maybe the medium itself is the message? Perhaps our greatest offering isn't merely what we accomplish, but who we become by what we do. Upon reflection, this teenager concluded, "We are taught to be the gift, but I am called to be the wrapping."
There’s an important lesson there – content centers around our sense of Self, while context decenters the self. When we focus on being the wrapping, we leave the gift to grace.
In your work, emergence plays a big role – the new potential when many people are connected in love and giving. AI machines can only combine existing data in a new way, so they are not capable of emergence. Or how do you see that? Can AI support human emergence into new qualities of consciousness? If so, how?
For one, machines are very capable of emergence. They routinely do unexpected things, often known as "hallucinations". In a now-famous episode of 60 Minutes, Google's CEO spoke on national television about how its AI learned Bengali even though no one instructed it to. Of course, that then raises the question of what other nefarious things it might learn too!
Can AI support our inner transformation in a way that interconnects us and gives rise to an emergent collective intelligence? As I mentioned earlier, I think the jury is out. It depends on what consciousness we bring to the table as we ask that question.
ServiceSpace also created bots drawing on the writings of specific authors like Buddhist teacher Sharon Salzberg. And I think you are planning more in that direction. In the spiritual traditions, the relationship between student and teacher is a very intimate, individual one that cannot be replaced by a machine, even if it has access to all sayings of a spiritual teacher. How do you see the value and limits of such "teacher-bots", or of theme-bots on specific topics like permaculture?
Teachers frequently get asked the same set of questions. So they often have various “FAQs” (Frequently Asked Questions) to cover the basic inquiries.
But what if you put together all their content and allowed people to interact with it in a language of their choice? That ends up being rather meaningful, as we have seen across the board.
It’s also humanizing in so many ways. For instance, if I’m on a meditation retreat with Sharon Salzberg, I may never have the guts to ask her how she might approach cooking a souffle – but I can ask that of the SharonBot. :)
Of course, that's just in the realm of content. If we start replacing context with that content, we will have cheapened the entire interaction. So if a meditation teacher starts thinking, "Instead of a 100-person meditation retreat with 4 assistant teachers, I can now do a 1,000-person retreat with 2 assistant teachers and a kiosk in every student's room" – I think that would be a disaster, as far as inner transformation is concerned.
So I think we’ll need to learn how to draw the boundaries skilfully.
The ServiceSpaceAI intro reads: "We can all imagine a GandhiBot, MandelaBot and TeresaBot, but how do we build a "many to many" field of datasets? And can someone create her own bot with a combination of PermacultureBot, MyMother'sBot, and BuddhaBot? What will it take to nurture infinite combinations of wisdom?" Can you explain what you envision here?
Imagine a world where each one of us has various companion bots. In fact, we don't need to imagine this – it is already in each of our pockets: every time we text someone, that "auto-complete" suggestion is your own bot of sorts. And this AI embedding is only going to skyrocket, especially in the coming year or two.
Right now, we don’t have much choice regarding the data-set we use for each of the bots. But what if we had open-source repositories of these different “personalities” that we could integrate into our many apps? It’s just a way to democratize questions like – “What would Gandhi do?” Or “What would my mother say?” Right now, we hold those aspirational inquiries only in certain moments, but maybe there’s a way to bake them into the very foundation of the many tools we use.
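A toy sketch of that composition idea: merge several open-source "persona" corpora into a single retrieval corpus for one's own custom bot. The corpora names and contents below are invented placeholders:

```python
# A toy sketch of composing a personal bot from open-source "persona"
# corpora, as imagined above. Names and contents are invented placeholders;
# a real system would pair this corpus with a retrieval step like the one
# sketched earlier.
PERSONA_CORPORA = {
    "permaculture": ["Observe and interact before you intervene."],
    "my_mother":    ["Eat before you travel, and call when you arrive."],
    "buddha":       ["Hold each question with loving-kindness."],
}

def compose_bot(personas):
    """Merge the chosen personas into one corpus for a custom companion bot."""
    corpus = []
    for name in personas:
        corpus.extend(PERSONA_CORPORA[name])
    return corpus

my_bot = compose_bot(["permaculture", "my_mother", "buddha"])
print(len(my_bot), "passages available to my personal bot")
```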
What visions do you hold for ServiceSpace to play a role in steering or educating AI more toward wisdom? What are your hopes that this can spread widely and be an actual force, given the profit-driven use of these machines by big companies that we see now?
ServiceSpace is oriented more towards a process than specific outcomes. Perhaps this process provides a bit of friction to afford a collective pause and rethink a deeper sense of alignment. Maybe we can create unique data sets around pro-social values that can be integrated into larger language models. Perhaps we can magnetize a volunteer army that invites intrinsically motivated human-reinforcement learning. And as we work with all this, maybe some new emergence might reveal itself in the space between all of us. All bets are on the table. :)
That said, we aren’t really championing a techno-optimist view. It’s much more of a compassion-optimist view.
Back in 1999, ServiceSpace started by building websites for nonprofit organizations. But it was never an "Internet is going to change the world and remove all suffering" kind of pro-tech play for us. Our core ethos centered around creating meaningful volunteer opportunities. A small act of service invites me to move from "me" to "we", which is immediately rewarded by nature by the release of so many neurochemicals within me – it's nature-funded! :) That then drops me into a deeper interconnection with life and cultivates a field of collective intelligence that is greater than the sum of its parts. That's so deeply aligned with who we are that it triggers an auto-catalytic virtuous cycle. As we like to joke, the reward for service is … more service! That's been the ServiceSpace mojo since its start.
So, how can AI be leveraged to create meaningful volunteer opportunities for building a societal foundation for intrinsically motivated service? That foundation, we feel, will have unending upstream benefits – well past the manifestations of the Internet, AI and beyond. The eighth-century poet Shantideva brings this infinite timeline into context:
“For as long as space endures, and for as long as living beings remain, until then, may I too abide to dispel the misery of the world.”