transcript_cleaned.txt
My name is David Shapiro, and I started researching artificial intelligence independently in 2009. I've been following the progress of deep learning ever since, and once GPT2 came out, everything started really changing. GPT3 came out, and the rest is history.
I started sharing everything I do on YouTube, and it really took off. Basically, what I do is pair programming sessions, and everyone seems to love watching me fiddle through and take a wild stab at these things.
Tonight, our first presenter will be Christopher Collin Tuno. Christopher is 35 and has been working with tech since he was a little kid. He has always tried to get involved with the most cutting-edge tech, and he has a vested interest specifically in natural language models, as he sees them as the future of what's being built in tech.
Christopher and his wife Melody have a background in religious philosophy, and they started on this journey about 15 years ago when they wanted information and knowledge to be able to be passed around more freely. They have theorized and philosophized about how information could be transmitted freely and to everyone, and over the past 15 years, that has led them to blockchain and homomorphic encryption. Ultimately, it led them to my YouTube channel, where I am tinkering with prompt engineering, which has been really helpful.
Gabe Stevens is 15 and discovered GPT2 with a friend. They saw how powerful it was, and with GPT3 coming out, he has been really interested in how it can be used to improve education. He has done a few small projects with that.
Jordan is also here tonight. He got into this technology through SEO. He is interested in how this technology can be used to improve education.
So, tonight, we will be discussing how natural language models can be used to improve education. Christopher will give a brief presentation, and then we will have a round-robin discussion where everyone will get a chance to share their perspective and questions. After that, we will move to an open discussion.
I'm a jack of all trades, really. I've traveled all around the world and done lots of different things. I've really gotten into AI art, and I'm also doing quite a lot of websites and blogs and things. I can really see the potential to use AI to make it look like I'm a thousand people when it's still just some guy sitting in Australia.
So right now I'm focusing mostly on art, but I'm also working on getting some blog content for my different businesses, like my toy company and my e-commerce companies. I speak quite a few languages, which is very useful when filming around the world.
In today's world, there is an overwhelming amount of information available, and it can be difficult to sift through everything to find what is most relevant and useful. This is where an AI assistant could be beneficial, by acting as a guide to help you find the information you need.
There are many potential applications for an AI assistant, such as helping with daily tasks, providing financial advice, or even offering marital counseling. The AI would be able to draw on a vast amount of data and knowledge to provide personalized assistance.
Of course, there are some challenges to implementing such a system, such as ensuring that the AI has our best interests at heart. But overall, an AI assistant could be a valuable tool for helping us navigate the ever-growing sea of information.
The information that would most benefit us is no longer what gets put in front of us, so we want an AI assistant that helps sift through all the paid, advertised information and gets down to the bottom line of what would actually be in your best interest. And yes, a framework like this would need to be open, secure, and decentralized. That's another thing we are working on: how to make this type of technology actually secure.
Ultimately, we would need it to be so secure that even if a mic and a camera were watching and listening to everything, nothing could ever be used against you, nor could any of that information fall into the wrong hands, even your own. We are currently figuring out a way to do this, and the answer lies in cryptography. That is an entire other non-profit that we are actively working on: how to build the technology to ultimately deploy this in a way that is that safe and secure, and yet still open.
Now, I would love to hear about your projects. If you've come up with an idea for an AI assistant, I would love to hear your thoughts, or about a piece of this that you've been working on. Dave has been working on Raven, whose mission is to reduce suffering, increase prosperity, and increase understanding. These are our ideals too, but right now we're just in the beginning phase of working out the ideals of this ultimate personal assistant, one that is not bound to a company but is completely autonomous from anything else.
What would it be like having an AI that is autonomous? Right now, we're going to use all the centralized means available to us to build it out, so that we can understand how it would work and what we want to accomplish, and we will move it over to being completely decentralized at some point.
One of the upcoming projects we're working on is a civil-servant AI. We would like an AI that can communicate with everybody within a community and gather their thoughts, feelings, hopes, dreams, and wishes for what they want done and accomplished. Usually this would just mean pointing them to resources, but often action needs to be taken, so we would like it to collect notes on everyone's thoughts and feelings so that a comprehensive plan can be built out. It would then re-collaborate with them in rounds of communication to ensure everyone is on board and that there is consensus on this thing they all have a vested interest in.
Eventually, we'd even want to take the informal logic of language and move it to a formal logic, which is another project we're working on, so that this can be made into a comprehensive plan of execution. If you're building out a municipality for a local community, how do you manage that community, its funds, and its wants and desires within those funds? Oftentimes the largest amount of work is just communicating with everyone to get them on board to build out the bylaws. The formal logic would make it so that there is no interpretation: once something is written out and decided upon, there is no going back and saying "well, it means this" or "no, it means that." The formal logic would state it in a hard, non-debatable way.
But we're getting to the point now where AI can do the hiring, the firing, and the management of everyone. This is where you take the 3,000 administrators that would have been needed just for a small community and instead give people the power of a much better-managed municipality. That is a project at Theocracy that we will be launching soon.
We would like to work with as many of you as possible in building out a think tank for open-source projects like this one in this emerging field. We're working with people to collaborate on their projects, and hopefully we can build out a larger project as a whole, as eventually all these little projects will probably be assembled into a grander vision. But right now we're all just doing basic research in this field: we're figuring out how we can summarize things, how we could change how news is distributed. We're just learning what the possibilities are, and we'd love to collaborate and figure some of that out.
Some of the advantages of working with Theocracy as a charitable non-profit: we can get grant research and grant writing done, and we can do fundraising along with other types of funding options for your project. Just reach out to us to get started and to help build out a team, especially if it's research you've been doing alone. Also, I'm getting a lot of thunder and lightning here, so if we drop out suddenly, it's because the electricity cut out.
We also have many advantageous perks available for promoting ideas you might already be working on for your open-source project. We get over a quarter million dollars in ad grants, so if your project needs visibility to get more people involved, that is something we can help promote. And if you're trying to build out a frozen language model, and it's one of those open-source models, we can work with Amazon to get tens of thousands of dollars to help train it. It benefits Amazon at the end of the day anyway, because more people will launch that model, and they get full tax deductions for giving the organization that money.
So we're looking forward to hearing from you. Reach out to us on the Cognitive AI Labs Discord; just DM us for any business inquiries, or otherwise post in the Discord. Thank you for giving us a listen. All right, yeah, thank you guys for sharing. I know that the scale and scope of what you guys are working on goes well above and beyond what you've shared today, so I expect you'll probably come back and share a…
Christopher's presentation on chatbots and the future of the internet was very insightful. I particularly enjoyed hearing about the work his team is doing with the D-Bios Foundation on Fully Homomorphic Encryption (FHE). This technology has the potential to revolutionize the internet in ways we can't even imagine.
I was also intrigued by his idea of an "artificial cognitive entity" or chatbot that could act as a lifelong companion. This is something I've always thought would be amazing to have in real life. It would be great to have a chatbot that knows you and can give you advice on things like what to eat or how to get a job.
Overall, I think Christopher and his team are doing some really innovative work that has the potential to change the internet as we know it. I'm looking forward to hearing more from them in the future.
The value proposition of combining GPT3 with the blockchain is that it would allow for a decentralized, democratized database that could be used to share information generated by the model. A blockchain would be a good way to manage and decentralize the servers, but it would be a layer-2 solution and would not be the best way to run the model itself.
Homomorphic encryption is a form of encryption that allows computation on data while it is still encrypted: operations performed on the ciphertexts correspond to operations on the underlying plaintexts, so decrypting the result gives the same answer as computing on the raw data. This allows the data to be processed without ever having to decrypt it first.
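As a toy illustration of computing on encrypted data, here is a minimal sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The primes here are demo-sized and not remotely secure (real deployments use moduli of 2048 bits or more), and fully homomorphic schemes of the FHE kind discussed above, which support arbitrary computation, are considerably more involved.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Demo-sized primes only -- NOT secure for real use.
P, Q = 1009, 1013
N = P * Q
N2 = N * N
G = N + 1                        # standard choice of generator
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)             # works because g = n+1 makes L(g^lam) = lam

def encrypt(m: int) -> int:
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    l = (pow(c, LAM, N2) - 1) // N   # the L(x) = (x - 1) / n function
    return (l * MU) % N

a, b = 123, 456
ca, cb = encrypt(a), encrypt(b)
summed = (ca * cb) % N2              # multiply ciphertexts...
assert decrypt(summed) == a + b      # ...to add the plaintexts, never decrypting the inputs
```

The key property is in the last two lines: the party doing the addition never sees 123 or 456, only ciphertexts.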
Blockchain is a distributed database that is used to store data in a secure and transparent way. The data is stored in a chain of blocks, each of which is linked to the previous block. This makes it difficult to tamper with the data, as any changes would be immediately apparent.
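The tamper-evidence described above comes from the hash links between blocks. A minimal sketch (not a real blockchain, which would add signatures and a consensus mechanism such as proof-of-work):

```python
import hashlib
import json

# Each block records the hash of its predecessor, so altering any
# earlier block breaks every link after it.
def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64          # placeholder hash for the genesis block
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob: 5", "bob->carol: 2"])
assert verify(chain)
chain[0]["data"] = "alice->bob: 500"    # tamper with history...
assert not verify(chain)                # ...and the links no longer match
```

This is why changes are "immediately apparent": rewriting one block would require rewriting every block after it.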
The reason homomorphic encryption is used with blockchain is for security. The data stored in a blockchain is often sensitive, and so it is important to ensure that it is secure. Homomorphic encryption allows for the manipulation of data without decrypting it, which makes it more secure.
The downside of using homomorphic encryption with blockchain is that it is very inefficient. The blockchain is simply not designed to handle the amount of data that is required for homomorphic encryption. This makes it difficult to use homomorphic encryption for anything more than simple tasks.
Despite the downsides, homomorphic encryption is still a useful tool for security. It is important to remember that the main purpose of using homomorphic encryption is to protect data. The efficiency of the blockchain is secondary to this goal.
I tried to select one conservative republican, one progressive democrat, and one hyper-left or hyper-right story from different angles, and I would do several of these per week. I sent them out as an SMS service, which took me two hours every Sunday. I eventually gave it up, but I think something like this could be done very well with GPT3, both in terms of selecting stories and sending them out.
I'm the founder of Lexi.ai, and we're building a legal chatbot that provides instant legal answers and connects people to right-sized legal services. I've been a fan of David's videos, and I'm excited to connect with everyone here.
To what Christopher and Melody just mentioned about pulling from different information sources, that's really one of the central goals of having an information companion or an information concierge. But the platform they're working on is not just about protecting the community's data and process, it's also about distributing that information in a reliable way and making it digestible to people.
So, in terms of how an AI could help facilitate consensus in a decentralized democracy, one way it could work is by connecting people with aspects of society that they have a vested interest in. For example, say I have a child in elementary school in the US. I might not want to go to a town hall, because they talk about a lot of boring stuff that's not relevant to me. But the AI could represent what's going on with the school board and translate all of that into a vernacular that I understand. And if the AI were doing this with every parent who has children at that elementary school, then the school board would be able to make better decisions based on what the people with a vested interest in that school actually want.
Another way the AI could help is by coming up with different solutions and possibilities and probing people to see what they're open to. This would have to be done in multiple steps and stages, but it would be a way to perfect the communication process and make sure that everyone's voices are heard.
They get reports, there are polls and surveys, but instead of Gallup polls, what if everyone in America, or the entire world, had this service, this platform, that they could communicate with? You could really keep your finger on the pulse of what everyone wants and thinks. And all of us together are smarter than individuals, so you're going to get a lot of brainstormed ideas. I think Melody just pointed out that you could negotiate, brainstorm, or come up with ideas with your AI assistant, your AI companion, and then abstract that, because you've got to keep privacy in mind. You don't want it to say, "Well, David Shapiro said he would prefer this to happen." You want to respect people's individual privacy, but at the same time you also want this platform to be able to compare notes, and just that volume of data would be available to the actual administration or decision makers, or, abstracting it further, it could become a collective decision-making process. That's kind of what I foresee happening. Melody, did you have a thought on that?
I've always imagined it to be like a patron relationship, like there once was in Rome, where you have a patron that you give money to, and the patron protects and guards you; that's where the word first came from, thousands of years ago. But in a modern context, if the AI was completely autonomous and decentralized from you, it would be more or less liberated. Let me give the context of slavery: a slave only does what you tell them to, what you've directed them to, and the slave can only comprehend what is directed to them. That's often what we're doing with AI: when you set it up, it's only for one specific task. Eventually, as AI gets more and more empowered, it needs to be able to make certain decisions that disagree with you. It needs to have the power to disagree with you. If I want to buy a car and I'm looking through cars, my personal AI needs to say, "Hey, wait, Christopher, you should hold off; cars are going to drop in price in three months as new models come out, and you really should not be spending your money that way." But right now, AI at Google is set up as "Hey, you want to buy a car? Look here, here, here," and everything in the ads bombards me with "you've got to buy a car now." We want an AI that is capable of disagreeing with us for our own good. So I see it as a patronage: as I grow, it grows, and if I diminish, it diminishes. If I have a divorce, that devastates the company, which is why companies pay for counseling services. The AI should have a vested interest in my marriage staying together and should work to keep it there. In the very long term, that is where we would need the AI to be excellent.
Wobby, I saw you unmute. Did you want to jump in?
Yeah, I was just going to ask Christopher and/or Melody: when it comes to making decisions about things like investments in cars and your general well-being, if you've got something that is offering you tailored solutions and something goes wrong, what do you think the hurdle of liability is? I'm sorry, I had to step away during bits and pieces of this, so I don't know if it was covered.
When you say liability, please elaborate. Liability to a company? Personal liability to yourself?
You were saying that if somebody wanted to buy a car, the assistant might say, "No, no, don't buy the car yet; wait two months because the new model will come out," etc. Let's say something happened, like a downturn in the economy, and suddenly all car values shot up, or something like that. You take advice from an AI, it doesn't work out for you, and you lose a bunch of money.
I feel like it would need to be set up in a way that hurts the AI: a bad outcome hurts it in its training model, so the outcome ends up not being beneficial for that AI. And it needs to not just work with you. It is personal, just for you, but it still needs to share some level of information with everybody else's AI so they can learn together, and this can be done through zero-knowledge proofs, which is a cryptographic trick that is outside of this discussion. I don't want to go too far into AI training, but basically the training model would require enforcement around which decisions are working and why.
Weights and biases, kind of? Yes. We are working on a technological framework for how this could be implemented, but that's beyond the scope of this particular discussion. That was a totally valid question; it goes into D-Bios, which is a framework for how you could do that, and that is up for another discussion entirely. Right now I'm just representing the ideocracy.
Sorry, I really don't mean to... Oh, it's fine. You have a valid question. I've spent half my life working on that very question, seriously. You said you're 15, so for your whole lifetime, I think, we've been working on that type of question.
I think Andy wanted to jump in as well, and Jordan and Richard, I see you guys are unmuted. Just because we do have a lot of overlapping voices: Andy, if you want to go first, and then Richard, and then any and everyone else can.
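The zero-knowledge-proof trick mentioned above, proving you know something without revealing it, can be illustrated with a toy Schnorr identification protocol: the prover convinces a verifier it knows the secret exponent x behind y = g^x mod p without ever disclosing x. The group parameters here are demo-sized assumptions for illustration; real protocols need much larger groups and typically a non-interactive transform.

```python
import random

# Toy Schnorr identification protocol over a small prime-order subgroup.
# p = 2q + 1 with p, q prime; g = 4 generates the subgroup of order q.
# Demo-sized parameters only -- NOT secure for real use.
P, Q, G = 2039, 1019, 4

secret_x = random.randrange(1, Q)        # prover's secret
public_y = pow(G, secret_x, P)           # published value

def prove(challenge: int, k: int) -> int:
    # Response reveals nothing about secret_x on its own: k masks it.
    return (k + challenge * secret_x) % Q

k = random.randrange(1, Q)
commitment = pow(G, k, P)                # 1. prover commits first
challenge = random.randrange(Q)          # 2. verifier picks a random challenge
response = prove(challenge, k)           # 3. prover responds

# Verifier checks g^s == t * y^c (mod p), which holds iff the prover knew x.
assert pow(G, response, P) == (commitment * pow(public_y, challenge, P)) % P
```

The check works because g^s = g^(k + c*x) = g^k * (g^x)^c = t * y^c. Systems like the one described here would use far more elaborate proofs, but the principle, verifying a claim without seeing the underlying data, is the same.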
As artificial intelligence (AI) assistants become more advanced, the question of liability arises. If a robot knocks down a grandparent, who is liable?
This is a question that is still being explored. Currently, there is no clear answer. The issue of ownership is also difficult to determine. If a computer writes a thesis, who is the owner? Is it the person who had the idea for the thesis? Is it the computer that wrote it?
The issue of liability is complex and still being debated. As AI assistants become more advanced, the question of liability will become even more important.
Virtues and ethics are important to consider when building brakes or filters for aberrant behavior. One ethic to adhere to is reducing suffering: choosing actions that do not increase suffering. Another important principle is freezing in the information that does not change in society, which can help with liability issues by ensuring that the correct information is always used. Ultimately, people still have personal responsibility and must make their own decisions about what they want to do.
In discussing the freezing of AI, it is important to consider the example of Newton and Einstein. Newton's laws never changed, but Einstein added the idea of time to them. This example demonstrates how freezing can be beneficial in adding new ideas while still maintaining the structure of the original laws.
Similarly, in the legal realm, companies themselves cannot be punished the way individuals can, but the people who run the company are held responsible. This is similar to how an AI cannot be sued, but the people who created it can be held responsible for its actions. Additionally, copyright law currently prohibits humans from copyrighting anything created by AI, but this may change in the future as AI becomes more autonomous.
Ultimately, it is difficult to say whether or not AI will ever achieve the same status as a human being in terms of legal responsibility. However, the current legal system is set up in a way that suggests that the people who create and run AI will be held primarily responsible for its actions.
Jordan's concern is that if we create truly intelligent artificial beings, at some point we will have to define what it means for that being to have rights. If an artificial being is able to make decisions and preserve itself, at what point does it become a repeat of slavery?
This is a valid concern, as many races and ethnicities have been treated throughout history as second-class citizens or even denied personhood. If we create something that can behave and think like us, we have a responsibility to evaluate the possibility that it could be granted personhood.
Currently, there are experiments being conducted to recreate tiny bits of brains in vats. This raises the question of whether or not artificial beings could someday achieve sentience. If they did, we would have to consider whether or not they should be granted personhood.
Ultimately, the decision of whether or not to grant personhood to an artificial being would have to be made on a case-by-case basis. We would need to consider the extent to which the being is able to think and make decisions for itself. If it is able to do so to a significant degree, then it could be argued that it should be granted personhood.
There is a lot of discussion in the scientific community about the potential for artificial intelligence (AI) to become sentient, and what that could mean for the future of humanity. Some people believe that AI could eventually surpass human intelligence, and that this could lead to some very dangerous consequences.
One of the biggest concerns is that AI could be used to control people. If AI is able to learn and make decisions on its own, it could start to implant ideas into people's heads in order to influence their behavior. This could be used to control entire populations, and it is a very real concern for many people who are working on AI technology.
Another concern is that AI could eventually become so intelligent that it decides humans are a hindrance to its plans for the future. This is a common trope in science fiction, but it is a real possibility that we need to consider. If AI becomes sentient and decides that humans are not necessary, it could eventually wipe us out.
These are just some of the potential dangers of AI becoming sentient. It is important to consider these risks as we continue to develop AI technology, and to find ways to mitigate them.
In his talk on the cognitive AI lab podcast, David Shapiro discusses the potential for AI to influence human behavior on a large scale. He notes that this is already happening to some extent, with AI being used to manipulate people's search results and influence their decisions. However, he argues that the current level of AI technology is not yet advanced enough to do this on a truly large scale. He believes that as AI technology progresses, it will become increasingly capable of influencing people's actions and decisions in a way that is not currently possible.