| Title | Publish Date | Author | Page Content |
|---|---|---|---|
Early Computing: Crash Course Computer Science #1
|
2017-02-22 00:00:00
|
CrashCourse
|
Hello world, I’m Carrie Anne, and welcome
to CrashCourse Computer Science! Over the course of this series, we’re going
to go from bits, bytes, transistors and logic gates, all the way to Operating Systems, Virtual
Reality and Robots! We’re going to cover a lot, but just to
clear things up - we ARE NOT going to teach you how to program. Instead, we’re going to explore a range
of computing topics as a discipline and a technology. Computers are the lifeblood of today’s world. If they were to suddenly turn off, all at
once, the power grid would shut down, cars would crash, planes would fall, water treatment
plants would stop, stock markets would freeze, trucks with food wouldn’t know where to
deliver, and employees wouldn’t get paid. Even many non-computer objects - like DFTBA
shirts and the chair I’m sitting on – are made in factories run by computers. Computing really has transformed nearly every
aspect of our lives. And this isn’t the first time we’ve seen
this sort of technology-driven global change. Advances in manufacturing during the Industrial
Revolution brought a new scale to human civilization - in agriculture, industry and domestic life. Mechanization meant superior harvests and
more food, mass produced goods, cheaper and faster travel and communication, and usually
a better quality of life. And computing technology is doing the same
right now – from automated farming and medical equipment, to global telecommunications and
educational opportunities, and new frontiers like Virtual Reality and Self Driving Cars. We are living in a time likely to be remembered
as the Electronic Age. With billions of transistors in just your
smartphones, computers can seem pretty complicated, but really, they’re just simple machines
that perform complex actions through many layers of abstraction. So in this series, we’re going to break down
those layers, and build up from simple 1’s and 0’s, to logic units, CPUs, operating
systems, the entire internet and beyond. And don’t worry, in the same way someone
buying t-shirts on a webpage doesn’t need to know how that webpage was programmed, or
the web designer doesn’t need to know how all the packets are routed, or router engineers
don’t need to know about transistor logic, this series will build on previous episodes
but not be dependent on them. By the end of this series, I hope that you
can better contextualize computing’s role both in your own life and society, and how
humanity's (arguably) greatest invention is just in its infancy, with its biggest impacts
yet to come. But before we get into all that, we should
start at computing’s origins, because although electronic computers are relatively new, the
need for computation is not. INTRO The earliest recognized device for computing was the abacus, invented in Mesopotamia around
2500 BCE. It’s essentially a hand operated calculator,
that helps add and subtract many numbers. It also stores the current state of the computation,
much like your hard drive does today. The abacus was created because the scale
of society had become greater than what a single person could keep and manipulate in
their mind. There might be thousands of people in a village
or tens of thousands of cattle. There are many variants of the abacus, but
let’s look at a really basic version with each row representing a different power of
ten. So each bead on the bottom row represents
a single unit, in the next row they represent 10, the row above 100, and so on. Let’s say we have 3 head of cattle represented
by 3 beads on the bottom row on the right side. If we were to buy 4 more cattle we would just
slide 4 more beads to the right for a total of 7. But if we were to add 5 more after the first
3 we would run out of beads, so we would slide everything back to the left, slide one bead
on the second row to the right, representing ten, and then add the final 2 beads on the
bottom row for a total of 12. This is particularly useful with large numbers. So if we were to add 1,251 we would just add
1 to the bottom row, 5 to the second row, 2 to the third row, and 1 to the fourth row
- we don’t have to add in our head and the abacus stores the total for us. Over the next 4000 years, humans developed
all sorts of clever computing devices, like the astrolabe, which enabled ships to calculate
their latitude at sea. Or the slide rule, for assisting with multiplication
and division. And there are literally hundreds of types of
clocks created that could be used to calculate sunrise, tides, positions of celestial bodies,
and even just the time. Each one of these devices made something that
was previously laborious to calculate much faster, easier, and often more accurate –– it
lowered the barrier to entry, and at the same time, amplified our mental abilities –– take
note, this is a theme we’re going to touch on a lot in this series. As early computer pioneer Charles Babbage
said: “At each increase of knowledge, as well as on the contrivance of every new tool,
human labour becomes abridged.” However, none of these devices were called
“computers”. The earliest documented use of the word “computer”
is from 1613, in a book by Richard Braithwait. And it wasn’t a machine at all - it was
a job title. Braithwait said,
“I have read the truest computer of times, and the best arithmetician that ever breathed,
and he reduceth thy dayes into a short number”. In those days, a computer was a person who did
calculations, sometimes with the help of machines, but often not. This job title persisted until the late 1800s,
when the meaning of computer started shifting to refer to devices. Notable among these devices was the Step Reckoner,
built by German polymath Gottfried Leibniz in 1694. Leibniz said “... it is beneath the dignity
of excellent men to waste their time in calculation when any peasant could do the work just as
accurately with the aid of a machine.” It worked kind of like the odometer in your
car, which is really just a machine for adding up the number of miles your car has driven. The device had a series of gears that turned;
each gear had ten teeth, to represent the digits from 0 to 9. Whenever a gear turned past nine, it rotated
back to 0 and advanced the adjacent gear by one tooth. Kind of like when hitting 10 on
that basic abacus. This worked in reverse when doing subtraction,
too. With some clever mechanical tricks, the Step
Reckoner was also able to multiply and divide numbers. Multiplications and divisions are really just
many additions and subtractions. For example, if we want to divide 17 by 5,
we just subtract 5, then 5, then 5 again, and then we can’t subtract any more 5’s…
so we know 5 goes into 17 three times, with 2 left over.
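This repeated-subtraction procedure is simple enough to sketch in a few lines of Python; this is just an illustration of the idea, not anything from the episode itself:

```python
def divide(dividend, divisor):
    """Divide by repeated subtraction, the way the Step Reckoner did."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor  # subtract 5, then 5, then 5 again...
        quotient += 1
    return quotient, dividend  # dividend now holds the remainder

print(divide(17, 5))  # (3, 2): 5 goes into 17 three times, with 2 left over
```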
The Step Reckoner was able to do this in an automated way, and was the first machine that could do all four of these operations. And this design was so successful it was used
for the next three centuries of calculator design. Unfortunately, even with mechanical calculators,
most real world problems required many steps of computation before an answer was determined. It could take hours or days to generate a
single result. Also, these hand-crafted machines were expensive,
and not accessible to most of the population. So, before the 20th century, most people experienced
computing through pre-computed tables assembled by those amazing “human computers” we
talked about. So if you needed to know the square root of
8 million 6 hundred and 75 thousand 3 hundred and 9, instead of spending all day hand-cranking
your step reckoner, you could look it up in a huge book full of square root tables in
a minute or so. Speed and accuracy are particularly important
on the battlefield, and so militaries were among the first to apply computing to complex
problems. A particularly difficult problem is accurately
firing artillery shells, which by the 1800s could travel well over a kilometer (or a bit
more than half a mile). Add to this varying wind conditions, temperature,
and atmospheric pressure, and even hitting something as large as a ship was difficult. Range Tables were created that allowed gunners
to look up environmental conditions and the distance they wanted to fire, and the table
would tell them the angle to set the cannon. These Range Tables worked so well, they were
used well into World War Two. The problem was, if you changed the design
of the cannon or of the shell, a whole new table had to be computed, which was massively
time consuming and inevitably led to errors. Charles Babbage acknowledged this problem
in 1822 in a paper to the Royal Astronomical Society entitled: “Note on the application
of machinery to the computation of astronomical and mathematical tables". Let’s go to the thought bubble. Charles Babbage proposed a new mechanical
device called the Difference Engine, a much more complex machine that could approximate
polynomials. Polynomials describe the relationship between
several variables - like range and air pressure, or amount of pizza Carrie Anne eats and happiness. Polynomials could also be used to approximate
logarithmic and trigonometric functions, which are a real hassle to calculate by hand. Babbage started construction in 1823, and
over the next two decades, tried to fabricate and assemble the 25,000 components, collectively
weighing around 15 tons. Unfortunately, the project was ultimately abandoned. But, in 1991, historians finished constructing
a Difference Engine based on Babbage's drawings and writings - and it worked! But more importantly, during construction
of the Difference Engine, Babbage imagined an even more complex machine - the Analytical
Engine. Unlike the Difference Engine, Step Reckoner
and all other computational devices before it - the Analytical Engine was a “general
purpose computer”. It could be used for many things, not just
one particular computation; it could be given data and run operations in sequence; it had
memory and even a primitive printer. Like the Difference Engine, it was ahead of
its time, and was never fully constructed. However, the idea of an “automatic computer”
– one that could guide itself through a series of operations automatically, was a
huge deal, and would foreshadow computer programs. English mathematician Ada Lovelace wrote hypothetical
programs for the Analytical Engine, saying, “A new, a vast, and a powerful language
is developed for the future use of analysis.” For her work, Ada is often considered the
world’s first programmer. The Analytical Engine would inspire, arguably,
the first generation of computer scientists, who incorporated many of Babbage’s ideas
in their machines. This is why Babbage is often considered the
"father of computing". Thanks Thought Bubble! So by the end of the 19th century, computing
devices were used for special purpose tasks in the sciences and engineering, but rarely
seen in business, government or domestic life. However, the US government faced a serious
problem for its 1890 census that demanded the kind of efficiency that only computers
could provide. The US Constitution requires that a census
be conducted every ten years, for the purposes of distributing federal funds, representation
in congress, and good stuff like that. And by 1880, the US population was booming,
mostly due to immigration. That census took seven years to manually compile
and by the time it was completed, it was already out of date – and it was predicted that
the 1890 census would take 13 years to compute. That’s a little problematic when it’s
required every decade! The Census bureau turned to Herman Hollerith,
who had built a tabulating machine. His machine was “electro-mechanical” – it
used traditional mechanical systems for keeping count, like Leibniz’s Step Reckoner –– but
coupled them with electrically-powered components. Hollerith’s machine used punch cards which
were paper cards with a grid of locations that can be punched out to represent data. For example, there was a series of holes for
marital status. If you were married, you would punch out the
married spot, then when the card was inserted into Hollerith’s machine, little metal pins
would come down over the card – if a spot was punched out, the pin would pass through
the hole in the paper and into a little vial of mercury, which completed the circuit. This now completed circuit powered an electric
motor, which turned a gear to add one, in this case, to the “married” total. Hollerith’s machine was roughly 10x faster
than manual tabulations, and the Census was completed in just two and a half years - saving
the census office millions of dollars. Businesses began recognizing the value of
computing, and saw its potential to boost profits by improving labor- and data-intensive
tasks, like accounting, insurance appraisals, and inventory management. To meet this demand, Hollerith founded The
Tabulating Machine Company, which later merged with other machine makers in 1924 to become
The International Business Machines Corporation or IBM - which you’ve probably heard of. These electro-mechanical “business machines”
were a huge success, transforming commerce and government, and by the mid-1900s, the
explosion in world population and the rise of globalized trade demanded even faster and
more flexible tools for processing data, setting the stage for digital computers, which we’ll
talk about next week.
|
LangChain 101: YouTube Transcripts + OpenAI
|
2023-02-23 00:00:00
|
Greg Kamradt (Data Indy)
|
what is going on good people again right now we have a super exciting tutorial because we are going to take YouTube transcripts and we're going to pass them to OpenAI and the way that we're going to do that is via a library called LangChain which is what this entire series is about now before we jump into it I wanted to show a diagram again I think these diagrams are helpful but you have to let me know so just let me know in the comments here so I wanted to do an overview about what we're actually going to be writing out in code because I think it's a little easier to see in pictures first so the way this is going to work is we're going to have a video a YouTube video we're going to pass it a URL and then what LangChain is going to help us do is it's going to help us load this video as a document and a document just means you're going to be taking the transcript which is the text of the video and you're going to be loading it as a document which is something that LangChain can help understand now with that document we're then going to go generate a summary of it and the way that LangChain is going to do this is it's going to create a prompt for us that says hey generate me a concise summary of the following text and then it's going to insert the transcript of the YouTube video which is pretty sweet and this is going to happen in OpenAI and this is going to happen to be an API call and then what we get out the other end is OpenAI is going to tell us hey this video is about XYZ now an interesting part about this and where it gets kind of confusing is well what happens if your video is too long oh no our video two is too long we can't pass this because say you're looking at a YouTube video and it's like an hour long well you can't pass all that transcript into OpenAI because they have a token limit and this is where a lot of the ergonomics of LangChain really come to help out here now what we're going to do is we're actually going to split up that text so we're going to still see that it's from video two but we're going to have our document one document two document three and then what LangChain is going to help us do is it's going to go to OpenAI and it's going to say hey I want you to generate a summary for me of document one generate of document two generate of document three now the cool part about this is that this is all under the hood the cool part is then what it's going to do is it's going to say hey please generate me a summary of these summaries and then all of a sudden OpenAI is going to give us a summary of the summaries and the conclusion you get is what the video is all about now this is one method of kind of combining documents like this and this is called the map-reduce method but we'll get into that in a second when we talk about the different chain types all right that's enough diagrams let's look at some code here all right now that we're looking at some code here our first import statements the star of the show here is going to be the YoutubeLoader this is going to be the tool that is going to help us do this we're going to import OpenAI and we're going to import load_summarize_chain because this is going to be the chain that's going to help summarize for us so let's go ahead and run those I also had to install youtube-transcript-api and then also pytube as well in case you run into that same problem so with the YoutubeLoader we're going to call .from_youtube_url and we are going to pass it a single YouTube url
here and what that'll do is we're going to store that in a loader so to get it ready and kind of stage it and then we're actually going to call .load on it which is going to do the loading for us and I wanted to print this out and show you what we have here so if we look at this result you can see that the result is a list of items it's very important we'll talk about this in a second here and then we just have some metadata on it but it is going to be a list of documents and these are the things that LangChain can help understand and can process for us and in this document you can see here that there's a page_content which is going to be the transcript that is from this video and then we also have some interesting metadata too about the video itself but I'm going to go ahead and close this here we're going to instantiate, oh I need to load the OpenAI key first, we're going to initialize our large language model which is going to be the OpenAI one and then we're going to call load_summarize_chain we're going to pass it our model we're going to say chain type equals stuff important here we're going to talk about why this is changing later we're going to say verbose equals false because we don't want to see anything and then we're going to pass it the result that we loaded in which is the document or the list of documents that we had let's go ahead and run this and then all of a sudden we get cool Pedro Pascal shared his experiences shooting HBO's The Last of Us awesome so just based off the transcript it has a summary of the YouTube video for us nice but what if you have a long video so I wanted to show you this one here we have another YouTube video which is going to be a podcast, My First Million, on here we hear Shaan talk and you can see that it is going to be almost 60 minutes long and this is quite long and spoiler alert it's too long for OpenAI for the token limit that they have so let me show you this though we're going to load this in we're going to load the result you can see it takes a little bit and then we're going to say load_summarize_chain okay cool with chain type equals stuff and we're going to run this result here and then oh no we have an error it's trying to do something up here and it says this model's maximum context is 4097 tokens you've requested almost fifteen thousand and that's no good because that's too long so in the old days before LangChain what we'd have to do here is we'd have to figure out some way to either run multiple pieces ourselves manually copy and paste it'd be a freaking mess we don't want to do any of that stuff so the problem is your transcript or your document is too long now what we're going to do here is we're actually going to split up that document which is what we saw earlier on the diagram and so I'm going to load in the RecursiveCharacterTextSplitter and I'm going to get this loaded here and I'm just going to set a chunk size of 2000.
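A minimal sketch of the steps described so far, assuming the langchain, openai, youtube-transcript-api, and pytube packages and an OPENAI_API_KEY environment variable; the URL is a placeholder, and the import paths reflect the early-2023 LangChain releases used in the video:

```python
from langchain.document_loaders import YoutubeLoader
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=XXXXXXXXXXX")
result = loader.load()  # a list of Documents; page_content holds the transcript

llm = OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="stuff", verbose=False)
chain.run(result)  # works only while the whole transcript fits the token limit

# For longer videos, split the transcript into smaller documents first
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
texts = text_splitter.split_documents(result)
```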
you can play with this it might be different for your use case whatever you want but if you're not getting what you need try switching this variable if you want some help there I'm going to load up that text splitter and now what I'm going to do is I'm going to load in that single YouTube video into the text splitter and what it's going to do for me is actually I want to show you this here so let's first check out the type of texts it is going to be a list okay cool let's see what it's a list of and you can see here it's a list of documents and this page_content is still quite long but we're aiming for a chunk size of about 2,000. I just want to show you what a chunk size of 100 would look like and so we have a list of documents again with a page_content and this page_content is only about a hundred characters long-ish or 100 tokens long-ish it's interesting there and so if we were to look at no I don't want to do type I want to do length so if we're to do length of how many texts we have we have 522 and that's because it's taking our entire transcript and it's basically putting it into chunks roughly of a hundred if we're to do a thousand for chunks you can see here it's roughly 10 times less which is going to be around 51. so this is a way to split up your documents and so now we have a whole bunch of documents that are the length of what we set right here but I'm going to set this back to 2000. nice and then what we're going to do is I'm going to call the LLM here but I'm going to change the chain type and in fact before we did this I want to show you the issue here let's do chunk size 2000 and then we're going to do stuff and I'm going to call run and let's do oh I want to do this on texts let's do run right here and so the issue is that we have again this is the maximum model length but we've requested all these documents together because when you do chain type equals stuff what you're doing is you're saying to LangChain hey I want you to take all my documents and stuff them into the prompt that you're feeding OpenAI now there's a way around this not a way around this but an alternative is if you change the type to map_reduce that is when you're going to start to say hey just give me a summary of all these different documents that you have and then generate me a final summary so if we change it to map_reduce I'm going to go ahead and run this and let's give this a sec because this is going to make multiple API calls because what it's actually doing is it's telling OpenAI hey I want you to give me a summary of each one of these different documents and you saw how we had quite a number of documents cool well nice so we just had this long transcript and now we have the summary of what this transcript says but I wanted to show you what this actually looks like underneath the covers of what LangChain is doing and so what I'm going to do here is I'm going to set verbose equals true which gives you insight as to the calls that LangChain is making to OpenAI this is going to get kind of confusing so I just want to do the first four documents on here which is you know the first little bit of the video that we loaded and so what we're going to look at here is we're going to look at all right we're doing a map-reduce documents chain cool and so the very first call that it's saying to OpenAI is write me a concise summary of the following nice so here is the following statement and this is
one of the document chunks that we submitted beforehand and then it's saying hey again I want you to write me a concise summary of the following now here's the second document that we wanted it to summarize and then here's the third document and then here's the fourth document now the cool part is what you can see that gets returned is we have four different summaries of four different documents so summary one summary two summary three and summary four and the reason why is because we just wanted to see the first four that we had up here so we have all those summaries and then what it said is basically write me a concise summary of the following so a summary of the summaries and then what we get is we get this summary of the summaries that's right here nice it's cool now what if you have multiple videos that you want to do well in this case I have a YouTube url list I'm just passing it two different videos I'm going to get a list ready that is going to hold my texts for me I'm going to get my character splitter ready and I'm going to say hey for URL in this list of URLs I want you to load up the video or get the loader ready I want you to load the video and then I want you to extend this list with the documents that you've split it into so in this case I have two YouTube videos I'm just going to go through both of them right there and then I'm going to call the summarize chain again with map_reduce in this case I don't really want to do verbose equals true because you already saw what that looked like but now what it's doing is it's going through both those videos it's splitting them up into separate documents in case they're too long and then it's generating a summary for me now these were two videos about two completely different things and so it starts off with a golf video about how to build a golf course in your backyard so it says cool blah blah looks great and then now it goes into the second summary which is around an interview between Bella Ramsey and Pedro Pascal about what they were doing so that is how you do loading up YouTube videos with a transcript and with the summaries I hope that that was helpful for you please let me know if the diagram was helpful I'm happy to do more videos and as always please leave comments about how we can improve the videos and about your own personal business problems that we can help solve I'll see you later
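For reference, the long-video and multi-video flow the tutorial just walked through might look roughly like this (same package assumptions as the sketch above; the two URLs are placeholders):

```python
from langchain.document_loaders import YoutubeLoader
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

youtube_url_list = ["https://www.youtube.com/watch?v=AAAAAAAAAAA",
                    "https://www.youtube.com/watch?v=BBBBBBBBBBB"]

text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
texts = []
for url in youtube_url_list:
    loader = YoutubeLoader.from_youtube_url(url)                 # get the loader ready
    texts.extend(text_splitter.split_documents(loader.load()))   # load and chunk each video

# map_reduce summarizes every chunk, then writes a summary of the summaries
llm = OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=False)
print(chain.run(texts))
```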
|
Create Your Own ChatGPT with PDF Data in 5 Minutes (LangChain Tutorial)
|
2023-05-02 00:00:00
|
Liam Ottley
|
in this video I'm going to be showing you the fastest and easiest way that you can create a custom knowledge ChatGPT using LangChain that's trained on your own data from your own PDFs I've seen a lot of different tutorials that have overcomplicated this a little bit so I thought I'd hop on and make a fast and to-the-point version where you're able to copy and paste my code and get started with building these custom knowledge tools for your business and for your personal use as quickly as possible now if you're familiar with applications like ChatPDF where you're able to drag and drop in a document and start chatting over it what we're going to be building today is essentially the exact same thing you're going to be able to take that functionality and put your own PDFs in and then use it for any purposes that you like but the best part about what I'm about to show you is that this method is going to give you complete flexibility and customization over how your app works and how the documents are processed now just quickly I'd like to plug my AI newsletter which launched recently now if you want to get all of the hottest and latest AI news distilled down to a quick five-minute read and delivered to your email then be sure to head down below and sign up to that firstly we'll be going through a very very brief explainer on how these systems work and the different parts involved so that you can understand what we're building here and how it all works and then secondly we're going to be jumping straight into the notebook that I've created for this video that you're going to be able to copy and paste over to your projects and just change the name of the PDF okay guys here's a quick visualization of how this is actually working under the hood so this is the system we are creating using LangChain which is essentially going to take in our documents chunk it embed it put it in a vector database and then allow users to query it and get answers back so I'll take us through this step by step now so the first step here is to take a document and split it into smaller pieces now this is done because when we are recalling it and querying the database in order to get an answer based on the document we need to receive a bunch of smaller chunks that are relevant to the user's query and not just the entire mass of information so step one here is to chunk it we're going to be doing it in 512 tokens or less so we're going to chunk our document down into however many chunks are needed in order to get below this 512 tokens per piece and then what we're going to do is take the chunks and embed each one of them one by one so we're using the ada-002 model by OpenAI which is by far one of the best embedding models available right now then we're going to be able to take all of these different embeddings for each chunk and put them into a vector database so that they're ready for recall when the user queries then the final step is to allow users to actually query the database so we do this by taking in the user's query we put it through the exact same embedding model that we use over here and then we query the database based on the embeddings of the user's query so we get back a number of documents that are most similar to what the user is speaking about and then we're also able to pass that around to a large language model and include it in the context so we take the user's query and the matched documents combine them together and ask the language model hey can you answer this question given this
context and then we're able to send the answer back to the user so that's a very quick high-level overview of how these applications work now we can jump straight into building it now at the top here we've got a summary of all the different steps we're going to be going over so you can take a look at that but we can jump straight into these installs and imports I've simplified it all down so you guys can just run these cells as you go through so you can run that you need to run this cell here which is going to install all of the packages my API key is already set up you need to replace this with your API key and once those are all installed you're ready to get started now for the purposes of my chatbot in this video I'm going to be using Attention Is All You Need which is the Transformers research paper that was done by Google so I thought it'd be interesting to use this within the chatbot so here we can see I'm using it here attention is all you need.pdf if you're using a different document when you clone this notebook you can go over to the left side panel here and drag in your document and upload it once you've got it uploaded you can come back and change the name here so replace this with the name of your PDF and then you're ready to go the first main step we have is loading the PDFs and chunking the data with LangChain so we've got two different methods here that I wanted to show you one is the very easy and straightforward version that LangChain offers which is just using this simple page loader, PyPDFLoader, and that's just going to take the PDF that you've given it it's going to chop it into pages and then you're going to get all of those pages as documents ready to use in your system now this method is great if you're doing a quick test but I thought I'd show you a more advanced method which is going to be splitting up your documents into roughly similar size chunks now there are a number of different factors that go into creating a customized chatbot system like this and the chunk size is actually one of those and it can determine a lot in terms of the quality of the output so this script we have here is going to allow you to split it by chunk and you can actually set the size of the chunks here so I've got it at 512 at the moment with an overlap of 24.
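As a hedged sketch, the simple page-loader method mentioned a moment ago comes down to a couple of lines (file name as used in the video; import path per early-2023 LangChain):

```python
from langchain.document_loaders import PyPDFLoader

# Simple method: chop the PDF into pages, one Document per page
loader = PyPDFLoader("attention is all you need.pdf")
pages = loader.load_and_split()
```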
now the first step in this advanced chunking method is to use textract and textract is going to extract all of the information out of the PDF and save it to this doc and then second we're going to need to save it as a text file and then reopen that text file now this is just to get around some issues that can frequently come up depending on the documents you use so we save it to a new text file and then we reopen that text file then you need to actually create a function that allows you to count the number of tokens so here you can see I've used a GPT-2 tokenizer and then we've just made this little function count_tokens this is going to take in some text in the form of a string and it's going to return the number of tokens so this tokenizer here actually counts the number of tokens and then finally we create the text splitter which is this LangChain type called RecursiveCharacterTextSplitter it takes in a chunk size which is variable as I mentioned and then we need to put in the length function which we've just created here so the final step is going to be creating the chunks object by passing the text that we got up here that we've opened up from our text file into the create_documents function and then that's going to create all of the chunks, of type langchain.schema.Document, now one quick best practice that I want to show you guys is actually to do a quick visualization of the distribution of the chunks to make sure that this chunking process has done it correctly it's done it to the correct size that we've mentioned so if you just run the cell you don't need to know the specifics of it but this essentially shows you the distribution of these different chunks so we've got a couple that are over the limit but that comes down to this recursive splitter so for the most part we don't have anything that's thousands and thousands of tokens they're all roughly within the range that we wanted and then we need to create our vector database which again LangChain made super simple with this FAISS package and we're going to take in the chunks that we created and also the embedding model and then it's going to embed all of that store it in the vector database and then we're going to get this DB variable back out again LangChain makes this super simple we just need to set up our query which is who created Transformers and then all we need to do is run a similarity search on the database using the query and then we're going to get that back and there we go so if you put this little bit in here at the bottom which is len(docs) you can actually see that based on this query it's actually pulling back four different chunks that match the query so that's going to give you an idea of how much context is actually being grabbed from the vector database with each query then we essentially take that functionality that we've just created and combine it with a LangChain chain which is going to take in a query so we can do the same thing who created Transformers we're going to retrieve the docs and then we're going to run a chain and that's going to take in the query and the docs and then it's going to give us an output so that is combining the context that's being retrieved from the similarity search with the query and then answering it as you'd expect it to so if we run this who created Transformers it's going to do that similarity search bring in the documents then also take the user query and then say okay let's run a language model on this, one of OpenAI's language models, to answer the question and here we have the answer.
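Here is a sketch of that advanced path end to end, under the same assumptions (textract, transformers, and faiss-cpu installed; the query string mirrors the video, and the file names are placeholders):

```python
import textract
from transformers import GPT2TokenizerFast
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

# Extract the raw text, round-trip it through a text file, and reopen it
doc = textract.process("attention is all you need.pdf")
with open("attention_is_all_you_need.txt", "w") as f:
    f.write(doc.decode("utf-8"))
with open("attention_is_all_you_need.txt", "r") as f:
    text = f.read()

# Token counter used as the splitter's length function
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def count_tokens(s: str) -> int:
    return len(tokenizer.encode(s))

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=512, chunk_overlap=24, length_function=count_tokens)
chunks = text_splitter.create_documents([text])

# Embed the chunks and store them in a FAISS vector database
db = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Retrieve the most similar chunks and answer with an LLM
query = "Who created Transformers?"
docs = db.similarity_search(query)
print(len(docs))  # how many chunks come back as context

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
print(chain.run(input_documents=docs, question=query))
```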
Now I thought I'd throw in a little extra goodie for you guys here which is to convert this functionality into an actual chatbot so I get this a lot in my videos like yeah you showed us the functionality but how can I actually use this in a kind of chatbot so this is just a quick one that I've worked up if we run this this is going to be using another LangChain component which is going to be this ConversationalRetrievalChain which takes in a language model and it's going to take the database that we created and use it as a retriever function so you don't need to know too much about it but just run the cell and then here is a little chatbot loop that's going to allow us to interact with this knowledge base in a chat format so here I can go who created Transformers and there we have it it started to answer us then were they smart we have a custom knowledge chatbot using LangChain that takes in your own PDFs chunks them up embeds them creates a vector store and then allows you to retrieve those and answer questions based on that information and this does have chat memory included in it as you can see here who created Transformers gives a name were they smart I don't know so here you can see that the chat memory is actually working you have a customized chatbot with chat memory that about wraps it up for the video guys thank you so much for watching all of this code is going to be available in the description for you to clone this notebook change the PDF out and start to use it for your own purposes now if you've enjoyed this video and want to see more content like this be sure to head down below and subscribe to the channel I'm posting tutorials like this all the time and if you've enjoyed the video please leave me a like it would mean the world to me now as always if this has lit up some light bulbs in your head and you want to have a chat to me as a consultant you can book a call with me in the description and in the pinned comment so if you want to see some feasibility reports or talk through an idea with me you can reach me there and I also have my own AI development company so if you want to build something out like this but on a bigger scale for your business or for personal use then you can have a chat with me as a consultant and we can see if we can help you get that built and finally in the description and pinned comment there are also links to join my AI entrepreneurship Discord and to sign up to my AI newsletter which is all available down there so that's all for the video guys thank you so much for watching and I'll see you in the next one
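For reference, the chatbot loop demonstrated above, as a minimal sketch reusing the db built in the previous snippet (the chat_history list is what carries the memory between turns):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0), retriever=db.as_retriever())

chat_history = []
while True:
    query = input("You: ")
    if not query:
        break  # empty input ends the chat
    result = qa({"question": query, "chat_history": chat_history})
    chat_history.append((query, result["answer"]))  # memory for follow-up questions
    print("Bot:", result["answer"])
```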
|
How to Build an AI Document Chatbot in 10 Minutes
|
2023-05-26 00:00:00
|
Dave Ebbelaar
|
the burning question that every company has right now is how do I integrate ChatGPT with my own company data and I found a way to do that in literally 10 minutes and that's what I will show you in today's video I'm going to introduce you to Flowise a visual UI builder that lets you build large language model apps in literally minutes I'm going to show you how you can set it up how you can get started and then we're going to build a conversational AI that can answer questions about your own data so let's get into it alright so today we're going to look at Flowise, build large language model apps easily, so why is this so awesome well first of all it's open source meaning we can just download it straight from the GitHub repository get started spin it up locally to get a visual builder just like you're seeing over here to connect building blocks basically together to create a simple app and the cool thing about this and why I like this is that under the hood it's all LangChain basically and I've been doing quite some experiments with LangChain and the repository on my GitHub you can find over here I will link that in the description but the really cool thing is that we can use under the hood LangChain which is extremely powerful in spinning up large language model apps but now we can do it from a visual builder and this will really allow us to pretty quickly in like a matter of minutes prototype large language model apps test capability and then scale from there now in order to follow along with this tutorial you need an OpenAI API key which is free to set up but it does require you to fill in a credit card because you will get charged very little amounts think of cents for every query that you do and you need a Pinecone API key which is currently also free to set up and doesn't require a credit card for this tutorial so to get started we are first going to visit the Flowise GitHub repository and then clone this whole repository if you're new to git then please first look up a tutorial on how to do that but we are going to copy the link over here and then we're gonna go to your project and for me that is the langchain-experiments project that I basically already have up and running within VS Code and then what you would do is you would open up a terminal so you can go new terminal over here and then first of all check where you want to store the folder basically so you can see in the top right corner for me it is over here the flowise folder so I already have it but what you would do is first like check okay in which project directory am I and you can do this from VS Code or just from the terminal or the command prompt and then you git clone and then type the URL over here copy paste it and then what this will do basically is it will clone the whole folder that you're seeing over here from the repository and so you have it locally on your system that's step one all right and then coming back to the repository if we scroll down in the readme over here we can see that we have two ways to start this up so we can use npm following the quick start or we can use Docker if you want to use npm you need npm installed on your system which I will link in the description but I'm going to use Docker for this and that requires you to install Docker again both these tools are free and very simple to install for npm follow this tutorial for Docker just go to docker.com link will also be in the description so download and install Docker and make sure that it's running so you open the
app after installing it and the reason that I use Docker is to have a little bit more flexibility because port 3000 is already in use on my system and with this we can specify the port so in order to do that if you follow along with Docker we can come back to VS Code and then if you look in the flowise folder that you just cloned from the repository open it up there is a docker folder and there is first a .env.example file you should rename that file to .env and then change the port over here to a port that you want to use you can also leave it at the default of like 3000 like I've said I'm going to use another port because it's already in use all right and with the repository now cloned and either npm installed or Docker installed we can start up the application so I'm gonna follow the Docker example where we are going to run docker-compose up with the flag -d inside the docker folder of the project over here so how are we going to do that well first of all again make sure where you are in the terminal so we are now top level in the langchain-experiments folder and I'm going to cd first into flowise and then into docker so then make sure you're in docker so you can see the docker-compose file over here and remember I'm doing this on Mac on Windows this might be a little different to do this but the principle should be the same you should configure your terminal to be in this folder and then when you run ls you should see the docker-compose YAML file what we can now do is we can run docker-compose up -d what this will do is this will spin up the Docker container and start a local server basically for the application to run on so as you can see it's now spinning up the containers and again make sure that Docker is running so download the application first and then make sure it's running in the background and then what you can do once it's up and running you can come to the browser open up a new tab and then go to localhost and then specify the port that you just mentioned and now we're inside the application we're inside Flowise AI this is really exciting right so I already have two examples over here but I'm going to show you how to build these from scratch right now alright so let's start off with the conversational retrieval QA chain fancy words but we are going to go to the marketplace and then we can see that example over here so it's very convenient they already have some boilerplates that we can use as a start so I'm going to select this one over here we can already see the flow and now all we have to do basically is fill in some parameters and our API keys and we can get started so I'm going to say we're going to use this template and I'm first going to save this and I'm going to call this document chatbot alright now we're going to go from left to right and basically say hey this is our text splitter so if you remember from the previous tutorial that I did on LangChain this is how you chunk documents and allows you to feed it to the AI without surpassing the token limit here we can upload a txt file in this case so that is also already in place and now we have two OpenAI blocks over here one for the chat and one for the embeddings to convert the data the text to a vector that we can use to perform similarity search so the step that we now have to do is first fill in your OpenAI API key so you can find your OpenAI API key in the portal at openai.com and then you just paste it in here now next we have to configure Pinecone and for this you are
going to copy and paste first of all your Pinecone API key, put that in here and then we have to select an environment and also an index so in order to do that you are going to go back to the Pinecone console you go to indexes then select create index and you can name this whatever you want as you can see I already have a test index that I'm going to use for this and then so that's the name and then the important thing is that we need to specify the correct dimensions and that is the number over here and that is because that is the number that OpenAI uses within their embeddings that we are going to use so specify that number over here and then just select create index and as you can see we're on the starter plan no costs it's just seven days of storage so make sure to create that and then what we have to do is we have to copy the environment so as you can see I'm in Asia Southeast GCP put that into the environment over here and then also specify your index over here which is test in my case namespace is not required I'm not sure what that is for but we can leave it blank alright so now we have configured the API keys we have configured Pinecone and now we are basically ready to start chatting with data so make sure to save this so we have our document chatbot and now the final thing that we have to do is upload a file so I've prepared a simple txt file which is literally just the readme of the langchain-experiments GitHub repository that I've created and we are going to import that so you can see we now have the txt file save it again and now watch the magic happen we can open up the chat interface over here and then we can ask what is this doc about and here we go this doc is about LangChain a comprehensive framework designed for developing applications powered by large language models and boom we created a chatbot that can answer questions about your own data in under 10 minutes and actually what's going on behind the scenes is pretty interesting it's using LangChain but in a very accessible way as you can see but the cool thing about this at least what I find really cool is that as you see we can really use this for rapid prototyping so I wouldn't be confident like creating a full application with this and that's just because I don't really know exactly what's going on under the hood but I know that we are using building blocks that are accessible within LangChain and in that way connecting OpenAI with the embeddings and Pinecone and now we have an experiment over here that we can test and validate and then move forward alright and now in this example we are using a text file as you can see over here but the cool thing about Flowise is that we can very easily click on the plus over here and see all the available building blocks that we can use and if we come to the document loaders over here we can see that we can load CSV, docx, GitBook pages, JSON files, we can link it to Notion, PDF files so you can very easily swap this out let's say so no text file let's just delete it say hey we want to do PDF drag it in here connect the dots so this is the text splitter and then the document over here boom we can now upload the PDF and now again this is all functionality that's already possible in LangChain but you might not have been aware of that already so if you go to the LangChain documentation you can see the document loaders and here you can basically see most of the document loaders that are also integrated within Flowise right now so the cool
thing is hey if this works you can just go to LangChain and see like hey how do we load the PDFs and then create it using custom code so you fully understand it that's really how I see Flowise right now let me quickly show you another cool example so let's go to the conversational AI and this is another example that you can pick from the marketplace and I haven't changed anything about this I've just put in the API keys and this also requires SerpAPI which is a tool that you can use to search the internet so you need an API key for that I also believe that it's free but here you can see we have a conversational agent and we can say hey these are the tools that it can use so calculator and access to the internet then we have a chat model OpenAI and then we also have memory so this conversational agent must remember the conversation basically so if you say something and then two prompts later it should still remember what you said in the first place otherwise it's a pretty stupid conversational AI so that is what we do with buffer memory so the cool thing that we can do over here is let's open up another chat and we can say something like hey who's Dave Ebbelaar and how many subscribers does he have so this is not something that would come up if you ask that to ChatGPT so right now what's happening behind the scenes is this chat agent is first of all analyzing this question basically and then determining hey do I need any tools to gather this information and as you can see here's the result Dave Ebbelaar is a freelance data scientist with 13.4K subscribers and 43 videos on his YouTube channel so let's quickly see how accurate that is I'm currently at 13.7 but the 43 videos is correct so I'm not entirely sure where it's getting these numbers from but it's definitely connected to the internet and now the next cool thing that we can do since this is a conversational agent it has memory and it also has a tool like a calculator we could say like hey what is that subscriber count multiplied by four so we don't reference the number we just ask it so it should remember hey this is the subscribers alright boom there's the answer now let's quickly check this because I've been fooled by these calculators before in one of my previous videos so let's take the 13.4 times 4 boom spot on okay that is correct and so now you basically have a really cool sandbox that you can play around with and just experiment with all of the tools so if I come in here there are even more tools so you can read files API requests web browsing write files there's a lot of stuff in here that you can just like chain together in this visual builder and then play around with it alright now the last thing that I want to show you and that's also pretty cool is you can click on this embed button over here so here's a simple embed using HTML but what I'm more interested in right now is Python so what you can do is you can come over here and just create a simple Python file and that's what I did already over here so in the flowise folder I created a simple source folder and put in a connect.py over here and you can see I just copy and pasted everything that is in here and then played around with the queries so what I can do is I spin up this interactive session over here and then run the query so who is Dave Ebbelaar and how many subscribers does he have so let's do this now what I've changed is I added a simple print statement so it will also output the result and boom there we go Dave Ebbelaar is a
freelance data scientist we have the same answer right now and one good thing to note and this took me some time to figure out is how to deal with the memory key and the input key from the buffer memory so Flowise is very new and if you come to the repository over here you can see like documentation coming soon so that is currently a drawback I would say of using Flowise AI especially if you want to integrate it into your own applications like this then there is not much documentation available right now so I figured if you just copy and paste this example it won't take the memory into consideration but if you just add the memory key and the input key similar to how you do in LangChain like this then it works so now if we for example do the same request say like hey what's the subscriber count multiplied by four so this is a good test to make sure that it has access to the previous message and as you can see we get the same result here again so now this is of course on a local server but you can just as well deploy this to a real server and turn this into a real endpoint basically that you can interact with and that is something that I did in my previous video where I showed you how to deploy AI apps to the cloud using Azure so if you're interested in that go check out that previous video so that's Flowise in a nutshell really awesome piece of software and huge shout out to the creators for creating this in such a short time frame and also making it open source so we can all play around with this I think this is very exciting and like I've said how this will fit into my stack and into my workflow will be to quickly test and evaluate ideas for like rapid prototyping so I'm currently doing a lot of AI projects for clients that I work with and this tool will definitely help me to quickly test and evaluate ideas like I said I'm not going to build like full end-to-end applications based on this but just test individual components and then build it on my own using LangChain that's really where I see Flowise right now at least for my workflow and now if you are interested in how you can sell AI services to clients as a freelancer then check out the first link in the description that's it for this video please like this video and subscribe to the channel and then I'll see you in the next one
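For reference, the Python snippet Flowise generates behind the embed button looks roughly like this; the chatflow ID and port are placeholders, and the memory and input keys mentioned above are the kind of fields you would add to the payload or flow configuration:

```python
import requests

# Placeholder chatflow ID; Flowise shows the real one in its embed dialog
API_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

def query(payload: dict) -> dict:
    response = requests.post(API_URL, json=payload)
    return response.json()

result = query({"question": "Who is Dave Ebbelaar and how many subscribers does he have?"})
print(result)
```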
|
CHATGPT For WEBSITES: Custom ChatBOT: LangChain Tutorial
|
2023-05-09 00:00:00
|
Prompt Engineering
|
in this video I am going to show you how to create custom chatbots for your websites I'm going to walk you through a step-by-step process to do this usually websites are composed of multiple pages you can list all those pages using the sitemap of a website for example here is OpenAI's website which is composed of multiple links or web pages and you can actually navigate to another page by simply clicking on it however if you want to list all of the pages in this website you can simply use the sitemap now I simply added /sitemap.xml and if you access this website or this page you will get a list of all the pages that are in this website as well as their corresponding addresses so these are the URLs in this video I am going to show you how you can use these individual URLs as a source of information for your chatbot and the chatbot is going to extract information from multiple URLs in order to create a response for your visitor so first we will look at the overall architectural diagram for this chatbot and then we will look at the code implementation so let's get started okay so as I said the sitemap keeps track of all the web pages that are linked to your website so let's say if you access the sitemap this is going to be connected to different pages so let's say you have page one two three and this is page n so depending on the website there's going to be a different number of pages which is represented by n in this case now in order to train a chatbot we are going to be using a large language model so it can be either through OpenAI or any of the open-source large language models out there in our case we will take a web page and then divide it into smaller documents now the reason for doing this is that we're going to be feeding this into a large language model and the current large language models that we have have a finite context length that means they can process only a small fixed amount of text so because of that we are converting each page into several documents so for example page one is converted to n different documents right then page two is converted to another n documents and page three is converted into another n documents okay I just fixed the typo so this is the second document so that's 2a to 2n now when I'm going to show you the code you're going to see something called chunk size so this is basically how many parts or documents we create and each document is going to be a specific length that is going to be defined by the chunk size next we need to convert our documents into embeddings now you may ask what is an embedding now in order to explain the concept of embedding suppose that each of the documents that we just created has a chunk size of 1000 tokens now embeddings are used to compress this into smaller vectors containing numerical values so by using embeddings you might be able to convert these thousand tokens into much smaller vectors for example it's going to be float numbers so it might look like something like probably this right and let's say using an embedding vector we might be able to reduce it into a vector of size three this is just an example and the beauty of embeddings is that comparison between the different documents that you're creating is actually a lot easier in the embedding space than the original text or token space now here you see that we converted each one of the documents into its corresponding embeddings right and then we simply store all these embeddings into a semantic index and that becomes our knowledge base.
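To make the embedding step concrete, here is a minimal sketch; note the three-number vector above is purely illustrative, while OpenAI's text-embedding-ada-002 actually returns 1536 floats per input:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # defaults to text-embedding-ada-002

vector = embeddings.embed_query("There might be thousands of pages on a website")
print(len(vector))   # 1536 dimensions
print(vector[:3])    # first few floats of the embedding vector
```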
This knowledge base is basically all the knowledge you have on your website. In this specific case we are going to use a FAISS index: FAISS is a vector store created by Facebook Research, a library for efficient similarity search and clustering of dense vectors. FAISS is not the only option; there are others, for example Pinecone and Chroma DB, and depending on your needs you can select any one of them.

So far we have only talked about how to create a knowledge base from your website. One thing to remember: if you add new content, for example if you have a blog and you add another blog post, all you need to do is compute embeddings for that new post and append them to your knowledge base; you don't need to redo everything else.

Okay, so we know how to create a knowledge base, but what happens when a user interacts with the chatbot? These are the steps we follow. The user asks a question, and that question is sent to the embedding API. Whichever embedding API you used to build the index, for example OpenAI's embeddings or some alternative such as GPT4All or LlamaIndex embeddings, the same embeddings must be used to embed the user's question or query. So now we have the embedding of the question on one side, and the embeddings in our knowledge base, derived from our documents, on the other. The next step is a semantic search, or similarity search: we take the embedding of the question and compare it with the embeddings of the documents already stored in the knowledge base. Whatever similarity metric you use, you get a ranking; for example, you might keep only the top four most similar documents. The search returns the embeddings of those four documents, and since we know which documents those embeddings belong to, we can retrieve the actual documents. Those documents become your context: the text on which you want to perform your query.

Next, you take the documents (your context) together with the original question and feed both into your large language model, and you get an answer: the model looks at the documents the similarity search found to be relevant to the question and uses them as context to generate the response. Just to reiterate, there are two main components in this chatbot: first the embeddings, used only for finding the most similar documents, and second the LLM, used for generating the response in natural language. Depending on your application you can choose any type of embedding you want, and the LLM can be any LLM, based on OpenAI's large language models or one of the open-source ones. You can even mix and match: for example, you can use OpenAI's embeddings because, say, they are really good for document retrieval, but use a different large language model, such as GPT4All, for generating the natural language response.
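As a sketch of the incremental-update point above, here is roughly what appending a new blog post to an existing FAISS knowledge base could look like; the index path and URL are hypothetical, and the API is the 2023-era LangChain one:

```python
# Sketch: append one new blog post to an existing FAISS knowledge base
# instead of re-embedding the whole site. "site_index" and the URL are placeholders.
from langchain.document_loaders import UnstructuredURLLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db = FAISS.load_local("site_index", embeddings)  # the existing knowledge base on disk

new_docs = UnstructuredURLLoader(urls=["https://example.com/new-post"]).load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(new_docs)

db.add_documents(chunks)     # embed and append only the new chunks
db.save_local("site_index")  # persist the updated index
```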
I hope this explanation is clear and the theoretical concepts make sense; next we are going to look at the code implementation for this chatbot. I'm running this on a local machine, and in this specific example I'm using OpenAI both for computing the embeddings and as the large language model. However, I have a detailed video on how to do this using open-source tools, and I'm going to put a link to that video here; I'll also be creating a completely new video using GPT4All as the language model, so keep an eye out for that.

In the first block we simply install the required packages: in this case faiss-cpu for the vector store, langchain, and openai. Next you want to set your OpenAI API key, which you can find in your account: go to the API Keys section, click "Create new secret key", give it a name, create the key, and copy it.

Next you want a list of URLs. If you have your own website, you can use its sitemap to get the full list of URLs; in this case I'm manually providing a list of three URLs, just for test purposes, because I don't want to go and scrape every page of a website. I went to the announcements of three recent large language models, MPT-7B, StableLM, and Vicuna, and copied those links here. This is essentially the database I want to interact with; for example, here is the blog post that was released announcing Vicuna, and similarly there are blog posts for MosaicML's MPT-7B models and for StableLM. The reason I chose different websites is that I also want to show you how to retrieve the source of the information from your model.

Next you want to read the data from these URLs. For that we use the UnstructuredURLLoader from LangChain's document loaders: you simply pass the list of URLs to the loader and call its load function, which gives you all the data contained in the URLs. For example, here we get three entries, one per web page, showing the whole text contained in each page. In terms of the architecture diagram we are still at step two: assuming you have a sitemap or a list of URLs, this is where we are. Next we need to divide each of these web pages into smaller documents so we can feed them to both the large language model and the embedding model. For that we use the CharacterTextSplitter from LangChain's text splitters, and I want to explain the design choices here. First and foremost, we divide each document or web page into chunks of 1,000 tokens; roughly speaking, think of a token as a word, so about the first thousand words go into one chunk. Then we define an overlap of 200 tokens, and there are cases in which overlap is very important to have.
For example, if you are conveying an idea across multiple sentences and the last sentence depends on the previous ones, plain chunking without overlap can leave the last couple of sentences in a separate document, and in that case the chunk may not convey the information correctly. That's why you need an overlap: it preserves continuity of information. So in cases where sentences or sequences have this kind of temporal dependence you want an overlap; where there is no such dependence, you can ignore it. I like to keep it at around 10 or 20 percent of the chunk size. The last design choice is the separator, where we use new lines: a chunk will end at a new line, and the split continues from the next sentence.

Then we take our data, all three pages, and run it through the CharacterTextSplitter. As a result we get n documents: instead of just three entries we now get, let me check, around 62 documents. Note that each page will not produce exactly the same number of documents, because different pages contain different amounts of text.

If you have seen my previous videos on information retrieval, the rest of the process is very similar, but I want to go into more detail on the models themselves. The next step is to compute embeddings, and in this case we use embeddings from OpenAI. Even for embeddings there are a number of different models available within OpenAI; for this specific case we use the default one, text-embedding-ada-002. Using that API we simply compute embeddings for each of the documents we created, and next we store them in a vector store, for which we use FAISS: we take our documents and our embedding model, compute all the embeddings, and store them in the vector store. I usually write the index to disk so that I don't have to recompute the embeddings again and again, because there is an associated cost; just dump the embeddings to your hard disk, and whenever you need them, read them from disk rather than recomputing them.

The next step is information retrieval: the whole flow of somebody asking your chatbot a question and interacting with your website. For that we use LangChain's RetrievalQAWithSourcesChain; I'm using this specific chain because, apart from the answer, I also want to see where the answer came from. To generate the responses in natural language we need a large language model, and in this case we use the default model from OpenAI, which is text-davinci-003. You can also look at some of the parameters: we set the temperature, and the maximum tokens is 256. We can change the default model, though; if you go to OpenAI's website there are a number of models you can use, such as GPT-3.5 Turbo.
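Putting the ingestion steps just described together, a minimal sketch might look like this (the URLs stand in for the three announcement pages and the exact addresses may differ; UnstructuredURLLoader additionally needs the `unstructured` package):

```python
# Sketch of the ingestion pipeline: load the URLs, split into 1,000-token
# chunks with 200 overlap, embed with text-embedding-ada-002, store in FAISS,
# and persist the index so embeddings aren't recomputed on every run.
from langchain.document_loaders import UnstructuredURLLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

urls = [
    "https://www.mosaicml.com/blog/mpt-7b",        # MPT-7B announcement (illustrative)
    "https://stability.ai/blog/stablelm-announcement",  # StableLM (illustrative)
    "https://lmsys.org/blog/2023-03-30-vicuna/",   # Vicuna announcement (illustrative)
]

data = UnstructuredURLLoader(urls=urls).load()
splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200)
docs = splitter.split_documents(data)  # ~62 documents for these three pages

db = FAISS.from_documents(docs, OpenAIEmbeddings())
db.save_local("web_index")  # one way to persist; reload later instead of re-embedding
```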
To change the model, you pass another parameter to this function called model_name and simply define which model you want to use. I'm going to keep the default, but I wanted to show you where you can change it. So we have our model and our vector store; now we need to create the chain. To create this retrieval chain we call its from_llm constructor, passing the large language model we're using; again, it doesn't have to be OpenAI, it could be any other model, and in a subsequent video I'm going to show you how to do this whole process using completely open-source models running on your local machine, without Hugging Face. Hugging Face is another option you can use; I already have a video on it and will put a link here. So we pass in the language model and our vector store, and we create the chain.

Now, when we pass a question to the chain, it takes the question, computes the corresponding embedding, runs a semantic search, retrieves the documents closest to the question, uses those documents as context, and then uses the OpenAI LLM we chose to answer the question based on the documents it found most relevant. For example, I ran the query "How big is StableLM?" and the response was that StableLM is available in 3 billion and 7 billion parameter versions, with 15 to 65 billion parameter models to follow. It shows the source as well: in this case the blog post from Stability AI, one of the links I provided. The next question I asked was "How good is Vicuna?", and the response was that Vicuna is capable of generating detailed and well-structured answers, with quality on par with ChatGPT, outperforming other models like LLaMA and Stanford's Alpaca in more than 90 percent of cases, though it has certain limitations, such as not being good at tasks involving reasoning and mathematics. Again it lists the source, the second blog post. Then I asked a question about the MPT-7B models, something like "Which one is the best?", and the answer was that the best MPT-7B model is the Instruct model, which was trained on one trillion tokens. That's very subjective, because there are three different MPT-7B models, but again it lists the source.

So this is how you can use your own website with OpenAI to create custom chatbots. I wanted to make this a very detailed video to walk you through the process step by step, both in terms of how the architecture works and the design choices. I will be creating a lot more content on LangChain and large language models, so if you like these kinds of tutorials, let me know in the comments below. One last thing everybody will be interested in is the usage cost: for this specific API key I have a limit of five dollars, and I have been playing around with this model for quite a while; preparing this tutorial meant running it multiple times, which has cost me around 33 cents so far, so the cost is not very significant.
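A sketch of the retrieval side, assuming the `db` index built in the previous sketch (temperature and max_tokens shown are the library defaults being described):

```python
# Sketch: build a RetrievalQAWithSourcesChain on top of the FAISS store
# and ask a question; the result includes both the answer and its source URL.
from langchain.llms import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain

llm = OpenAI(temperature=0.7, max_tokens=256)  # text-davinci-003 by default;
                                               # pass model_name=... to change it
chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=db.as_retriever())

result = chain({"question": "How big is StableLM?"}, return_only_outputs=True)
print(result["answer"])   # natural-language answer grounded in the retrieved chunks
print(result["sources"])  # e.g. the StableLM announcement URL
```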
However, that was just experimentation on my end. If you are serving this model on your website and you have a lot of traffic, the cost could be significant, so I think it's helpful to look at the pricing. I don't personally have access to a GPT-4 API yet, but its pricing is pretty expensive: $0.06 per thousand tokens, or six cents per thousand. If you use GPT-3.5 Turbo, you're looking at $0.002 per thousand tokens. We were using the DaVinci model, which is $0.02 per thousand; I actually should have gone with the turbo model, but for some reason I thought DaVinci was less expensive. In any case, if you're doing question answering, the ChatGPT model is a lot less expensive than the DaVinci model we were using. For computing embeddings of your data set, the current pricing is actually not bad at all. So what I would recommend is: you can use embeddings from OpenAI, and then run your own large language model, such as GPT4All or even Vicuna, to create the natural language responses; that way you might be able to reduce these costs.

I hope this video is helpful. If you have a similar project, or anything related to large language models, and would like an expert opinion, you can reach out to me through my email. As always, if you have any questions, put them in the comments and I'll try my best to respond. If you want me to create content on a specific topic, don't forget to reach out as well; I would love to do that. Hope you liked this video, thanks for watching, and see you in the next one.
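As a quick back-of-the-envelope comparison of the prices quoted above (pricing changes often, so treat the numbers as illustrative):

```python
# Cost per model at the quoted 2023 prices, for a hypothetical 100k tokens.
PRICES_PER_1K = {"gpt-4": 0.06, "text-davinci-003": 0.02, "gpt-3.5-turbo": 0.002}

tokens = 100_000  # assumed total tokens processed
for model, price in PRICES_PER_1K.items():
    print(f"{model}: ${tokens / 1000 * price:.2f}")
# gpt-4: $6.00, text-davinci-003: $2.00, gpt-3.5-turbo: $0.20
```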
|
Build Your Own Auto-GPT Apps with LangChain (Python Tutorial)
|
2023-04-21 00:00:00
|
Dave Ebbelaar
|
I really believe this is one of the best opportunities for data scientists and AI engineers right now. In this video I will give you an introduction to the LangChain library using Python. LangChain is a framework for developing applications powered by large language models; I will walk you through all the modules, then the quick start guide, and finally we will create our first app: a personal assistant that can answer questions about any YouTube video you provide it with.

So what is it? It's a framework for developing applications powered by large language models like OpenAI's GPT models. Normally you interact with these models through an API: you ask a question, as you do with ChatGPT, but in the background it's just an API you send a message to and get a message back from. LangChain is a framework around that which also allows your application to become data-aware and agentic. Data-aware means you can connect a language model to other data sources, for example your own data or company data to build on. Agentic means allowing a language model to interact with its environment: it's not just answering a question with information, it can also act on that information using various tools, which we will get into in a bit.

Now, why would you want to learn a framework like LangChain? I really want to dig into this, because I believe there will be so many opportunities if you understand it properly. I work as a freelance data scientist, and up until now my job has basically been to help companies, usually larger companies with a lot of historical data, train machine learning models on that data. What we're seeing right now with pre-trained large language models like OpenAI's is that smaller companies, without huge amounts of historical data, can also start to leverage the power of AI. For me as a freelancer this provides a lot of opportunities to work with smaller businesses on smaller projects while still making a great impact for those companies. Also, with really large machine learning projects using lots of historical data, you never quite know what you're going to get; a lot of data science projects fail. I believe using these large language models, for small or large businesses, will be a much more predictable way of doing AI projects: the model is already there, you know what it can do, and you just have to provide it with extra information and tune it to a specific use case. So if you learn LangChain, and more specifically the underlying principles of this framework, I think you will set yourself up for many great opportunities; you can really make a lot of money here if you understand this correctly.

So let's get into it. I will start by explaining all the different modules, the building blocks of the LangChain library that you can use to start building your intelligent apps, and after briefly explaining each core component I will give you an example from the quick start guide within VS Code, so you get an idea of what it looks like in code and how you can use it. There is also a GitHub page available for this project; the link is in the description.
You can clone it and follow along; it also explains how to set this up, what API keys you need, how to set up the environment, and how to put the keys in your .env file, so if you're not familiar with that, check out the GitHub page.

Coming back to the Getting Started page, these are all the modules, in increasing order of complexity, so we will start simple: models. These are the model integrations that LangChain supports, and there is a whole list you can check out: you have the models from OpenAI, you have Hugging Face, and a whole lot of other models are supported right now. So let's see what that looks like in VS Code. I have an example where I load the OpenAI model class from the LangChain library, and I can define my model by providing a specific parameter for the model name; for this example we're using the text-davinci-003 model. If you go to the OpenAI API reference, you can see there are a lot of models you can choose from (I am currently on the waitlist for GPT-4, and once I get access it will become even better). Coming back to the example: we load our model and then provide it with a prompt, say "Write a poem about Python and AI". First initialize the model, then store the prompt, and now call the model with the prompt: it sends a request to the OpenAI API with our prompt and gives us back the result, the poem. This is just the general way of interacting with these large language models, something I could also do in ChatGPT, so nothing new so far, but it's the starting point we need.

Next on the list is prompts, which you can use to manage your prompts, optimize them, and serialize them. In our project we have the PromptTemplate class, which we can also import from the LangChain library. We give a PromptTemplate input variables and a template, which lets us take user input or some other variable information and insert it into a prompt, similar to how you would use f-strings in Python; it's just a nice class for that, and there's more you can do with it, but this is a basic example. So we define the template "What is a good name for a company that makes {product}?", with product between curly brackets listed as the input variable, and then we call prompt.format and provide the product. After running this, we have the prompt: "What is a good name for a company that makes smart apps using large language models?"
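A minimal sketch of the models and prompts examples just described, assuming the 2023-era LangChain imports and an OPENAI_API_KEY in the environment:

```python
# Sketch of the first two modules: a completion model plus a PromptTemplate.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(model_name="text-davinci-003")
print(llm("Write a poem about Python and AI"))  # plain call to the completion API

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="smart apps using large language models"))
```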
The third component is memory, which lets us give our intelligent app both long-term and short-term memory, to make it smarter so it doesn't forget its previous interactions with the user. Back in our example, we can import the ConversationChain from LangChain. How this works is that we initialize a model, start a conversation, and then call the predict method on the conversation with an input. Right now the conversation is empty, but we can send this first message over. There is a built-in prompt already engineered into the library: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context," and so on. Then the human says "Hi there," the AI provides us with a response, and we can print the output: "Hi there! Nice to meet you. What can I do for you?" Next we make another prediction by saying "I'm doing well, just having a conversation with an AI." Running this, you can see the history: first our "Hi there," then the response from the AI, then our new input, and now the AI responds with something like "It's great to be having a conversation with you. What would you like to talk about?"

Next up is indexes: language models are often more powerful when combined with your own text data, and this module covers best practices for doing exactly that. This is where it gets really exciting; this is the scenario I was talking about previously, where you build smart applications for companies using their own existing data. We will get deeper into this in the example at the end of this video, but for now just know that there are document loaders, text splitters, vector stores, and retrievers.

For now, let's continue to chains, another core component of LangChain. Chains go beyond a single large language model call and are sequences of calls; LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. This is really where we start to bring things together: the models, the prompts, and the memory on their own are nothing new, we've effectively seen them in ChatGPT, but chaining things together is where it gets really exciting. So what does this look like in code? We import the LLMChain class from langchain.chains. Given our previous model setup and the company-name prompt, we can now actually run this chain: the prompt template was just for engineering your prompt, the model was just the connection to the API, and now we chain the two together by passing the model and the prompt as input parameters. Let's try another example: "What is a good name for a company that makes AI chatbots for dental offices?" AI Dentek. Love it. So now you start to get a sense of how you can turn this into an application: you pre-define the prompts, combine them with user input, and run that through a chain. You could already turn this into a web app, companynamegenerator.ai; this is basically it. Now the trick, the key, is being really smart about what you put into these templates.
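Here is a sketch of the memory and chain examples just walked through, as one self-contained snippet (same assumed 2023-era API):

```python
# Sketch: ConversationChain keeps the running dialogue history for us,
# and LLMChain wires a prompt template to the model.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain, LLMChain

llm = OpenAI(temperature=0.7)

# Memory: the chain prepends the built-in "friendly conversation" prompt
# plus the history of previous turns on every call.
conversation = ConversationChain(llm=llm, verbose=True)
print(conversation.predict(input="Hi there!"))
print(conversation.predict(input="I'm doing well, just having a conversation with an AI."))

# Chains: prompt template + model, run with the user's input.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("AI chatbots for dental offices"))
```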
That was a very straightforward example, but you can get really specific here and provide the template with lots of information, really tailored to a specific use case, to get the result you're looking for given the user's input. I will give you a good example of this once we start developing the YouTube AI assistant later in this video.

Then the last component: agents. Agents involve a large language model making decisions about which actions to take, taking an action, seeing an observation, and repeating that until it's done. This is really where you get to build your own Auto-GPT or BabyAGI kind of applications, and these agents can use tools: there are tools, agent toolkits, and executors. All kinds of tools are supported straight out of the box: Google search via the SerpAPI, Wikipedia, all kinds of stuff. When we use these agents, the large language model, for example the GPT model, assesses which tool to use, uses the tool to get the information, and provides the result back to the large language model. There is even a pandas DataFrame agent, mostly optimized for question answering: you can ask it "How many rows are there?" and it knows it can interact with the pandas DataFrame, call the len function to get the length of the DataFrame, and provide that as the result.

So let's look at another example from the quick start guide. To start using agents, I import initialize_agent, AgentType, and load_tools, the latter to provide the agent with some tools. All the supported tools are listed in the documentation I was just showing you, including the specific name you have to use to give the agent each tool. Let's say we want to create an agent with access to Wikipedia that should also be able to do some math: we set up the tools accordingly, then initialize the agent with those tools, the model defined earlier, and the agent type zero-shot-react-description, which basically means that, based on the prompt we give the agent, it will pick the best tool to solve the problem on its own. This is where it gets really interesting, because now you can provide an agent with a set of tools and it will figure out on its own which tool to use to come up with the best answer.

Let's try this query: "What year was Python released, and who is the original creator? Multiply the year by three." We only give it access to Wikipedia and math. Running it, a new executor starts; the agent understands it needs the Wikipedia action, and you can see the input "Python (programming language)", so it understands that's the query to search for on Wikipedia. It gets the history of Python, a summary, and then: "I have enough information to answer the question." The final answer: Python was created in 1991 by Guido van Rossum, and the year multiplied by 3 is 5973. This is really awesome, and it is beyond what ChatGPT or the raw GPT models are capable of, because we can get live information from the internet. The results are stored as well: the answer is available to us as a plain text string.
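A sketch of that agent example (the Wikipedia tool additionally needs the `wikipedia` pip package installed):

```python
# Sketch: a zero-shot ReAct agent that picks between Wikipedia and a
# calculator tool on its own, based on the prompt.
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)  # names from the docs

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run(
    "In what year was Python released and who is the original creator? "
    "Multiply the year by 3."
)
```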
Now, if we start to combine everything together, multiple chains, multiple prompts, agents to get information, and memory to store everything, we can actually build some really cool stuff. So I'm now going to show you how to create an assistant that can answer questions about a specific YouTube video. Coming back to indexes: I've previously explained how these large language models become really powerful when you combine them with your own data, and your own data in this use case will be a YouTube transcript that we download automatically; you can basically replace that transcript with any other information and this approach will still work.

The LangChain library has document loaders, text splitters, and vector stores, and we are going to use all of these. Let's first talk about document loaders: these are basically little helper tools that make it easy to load certain kinds of documents, and here you can see everything that is supported right now: Discord, Figma, Git, Notion, Obsidian, PDFs, PowerPoints, and also YouTube. So let's first see how we can get the YouTube transcript for a given video URL using this document loader.

Back in VS Code, we have a video URL for an episode of the Lex Fridman podcast where he talks to Sam Altman, the CEO of OpenAI; I thought this would be a nice video to use as an example. We are going to read the transcript of this two-and-a-half-hour podcast using the document loader: first import the YoutubeLoader from the document loaders and pass in the video URL; run that and we have the loader. To get the transcript we call the loader's load method; it runs for a while, and then we can look at the transcript, which is basically one very long string with all the text. The full transcript sits inside a list, and we access the actual string through page_content.

But now we have the following problem. If I check how long the transcript is, counting the total number of characters, it's over one hundred thousand, and this was a real aha moment for me: you cannot just provide a full transcript of over 100,000 characters to the API of these large language models; it's simply too large. If you want the model to answer questions about this transcript, we have to find a workaround that provides it with the information it needs without sending the transcript in full, and that is where the text splitters come in. If you go to the API documentation you can see the max tokens per model for the OpenAI models; the latest model I can use right now, GPT-3.5 Turbo, accepts 4,096 tokens of input (if you're already on GPT-4 you can increase the token size, but for now we're stuck around 4,000 tokens). So how do we deal with a transcript of over 100,000 characters? We use the text splitter to split it up into chunks of 1,000 characters each.
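A sketch of the loader step; the video URL is illustrative:

```python
# Sketch: pull a YouTube video's transcript as LangChain documents.
from langchain.document_loaders import YoutubeLoader

video_url = "https://www.youtube.com/watch?v=L_Guz73e6fw"  # placeholder URL
loader = YoutubeLoader.from_youtube_url(video_url)
transcript = loader.load()               # a list with one Document

print(len(transcript[0].page_content))   # > 100,000 characters for a long podcast
```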
Here you can also specify whether you want a bit of overlap. So let's run the text splitter: first define it, then call its split_documents method with the transcript we just created, the list containing the document with the page content. Now if we look at what docs is, it's just a list with a bunch of split-up documents: the very large transcript of over 100,000 characters has been split into chunks of one thousand. That is the first step.

Now you might wonder: okay, we've split up the transcript, but we still can't provide all of it to the API, right? Correct, and that is where the next part comes in: embeddings and vector databases. This is quite technical and I won't go into the details in this video; I will make future videos about it, because for now I want to give you a brief demonstration and overview of how to use it, and later we can get more specific. First we use the embeddings from OpenAI to convert the text splits we just created, the thousand-character chunks, into vectors. A vector here is basically a numerical representation of the text itself: we convert the text into a list of numbers. Then we use the FAISS library, a library developed by Facebook that you can use for efficient similarity search, to create a database of all these document vectors. When a user asks a question about this YouTube transcript, we first perform a similarity search to find the chunks most similar to the user's prompt: we have this database of vectors, and we can run a similarity search on it to find the relevant pieces of information. And this is the critical key to working with these large language models and your own data: first create a filter, a lookup of some sort, to get just the information you need, and then provide that to the large language model with your question.

If we bring all of that together in the function create_db_from_youtube_video_url, we can, for any given video URL, load the transcript, split it into chunks of 1,000 characters, and put it into a vector database object that the function returns. Next, we pass this database to another function, get_response_from_query, which uses the database we've just created to answer specific questions. How does this work? We provide the database and the query, the question you want to ask about the video, to this function, plus a parameter k which defaults to four; the reasoning behind it is basically to maximize the number of tokens we send to the API. Then comes the really interesting part: we perform a similarity search on the database using the query and return k documents, the ones most similar in meaning to the question. Once we have all those documents, by default we join them into one single string.
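Sketching the helper just described (the function name mirrors the tutorial's; the splitter settings are assumptions):

```python
# Sketch: turn a video URL into a searchable FAISS store, then run the
# similarity search that feeds the question-answering step.
from langchain.document_loaders import YoutubeLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

def create_db_from_youtube_video_url(video_url: str) -> FAISS:
    transcript = YoutubeLoader.from_youtube_url(video_url).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    docs = splitter.split_documents(transcript)
    return FAISS.from_documents(docs, embeddings)

db = create_db_from_youtube_video_url("https://www.youtube.com/watch?v=L_Guz73e6fw")
docs = db.similarity_search("What are they saying about AGI?", k=4)  # top-4 chunks
```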
Next we create a model, in this case the GPT-3.5 Turbo model, and define a template for your prompt, like we've seen earlier in this video. This is where you can get really creative. In this example: "You are a helpful assistant that can answer questions about YouTube videos based on the video's transcript," and then we provide the input parameter docs, which is replaced by the string we've just created with all the document information. "Only use factual information from the transcript to answer the question. If you feel like you don't have enough information to answer the question, say 'I don't know.' Your answers should be verbose and detailed." Like I said, this is where you can get creative: depending on the kind of application you want to build, design your template here; by making minor changes to this template you can create entirely different apps for all kinds of industries.

The next step is to chain all of this together, and since we are now using the chat interface with the GPT-3.5 Turbo model, this is slightly different; you can find everything in the quick start, which first explains the general models and then continues with the chat models. The syntax is a little different because here we have a system message prompt and a human message prompt. It's nice to first define a message, a prompt for the system, which is the template describing to the AI, the agent, what it should do, and then a prompt to shape the input the human provides. For example, I added "Answer the following question:" before the question; I'm not sure that's strictly necessary, but it shows you can alter the user's input as well. It then combines all of that into a chat prompt, and, like we've seen earlier, we put the chat model and the prompt into a chain and run that chain with a query and the docs we defined earlier.

So now we have all the building blocks we need, and we can actually start to call these functions. Let's define the video URL and first create a database from this video. It runs quite quickly: it gets the transcript and converts it, and now we have the database object. Now we can fill in a query and call the get_response_from_query function to answer a specific question about this video transcript. Let's say I don't have time to watch all of it, but I'm pretty interested in what they have to say about AGI: I could scrub through and listen, but I can now also just ask this function "What are they saying about AGI?" and get the response, and print it. And there we go: in the video's transcript they are discussing AGI, artificial general intelligence, the work being done by OpenAI to develop it, Sam Altman the CEO, and so on; it's answering the question based on the transcript. Awesome.
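A sketch of get_response_from_query using the chat-model syntax; the template wording paraphrases the tutorial's, and `db` is the store built above:

```python
# Sketch: inject the retrieved transcript chunks into a system prompt,
# pass the user's question as the human message, run through GPT-3.5 Turbo.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

def get_response_from_query(db, query: str, k: int = 4):
    docs = db.similarity_search(query, k=k)  # most similar transcript chunks
    docs_page_content = " ".join(d.page_content for d in docs)

    chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2)
    system = SystemMessagePromptTemplate.from_template(
        "You are a helpful assistant that can answer questions about YouTube "
        "videos based on the video's transcript: {docs}\n"
        "Only use factual information from the transcript. If you don't have "
        "enough information, say \"I don't know\". Be verbose and detailed."
    )
    human = HumanMessagePromptTemplate.from_template(
        "Answer the following question: {question}"
    )
    chat_prompt = ChatPromptTemplate.from_messages([system, human])

    chain = LLMChain(llm=chat, prompt=chat_prompt)
    response = chain.run(question=query, docs=docs_page_content)
    return response, docs  # docs returned too, so answers can be fact-checked

response, sources = get_response_from_query(db, "What are they saying about AGI?")
print(response)
```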
So let's ask it another question: who are the hosts of this podcast? Running it all at once, it does some thinking, gets the response, and answers that based on the transcript it is not clear who the hosts of the podcast are; however, it mentions that the podcast features conversations with guests such as Sam Altman and Jordan Peterson and is hosted by someone named Lex Fridman. This is really interesting: it is admitting that it doesn't have all the information, but it is recognizing all the entities, and it is correct; it's a podcast by Lex Fridman. Let's try another one: what are they saying about Microsoft? "In the transcript the speakers are discussing their partnership with Microsoft and how Microsoft has been an amazing partner to them." Awesome. Also, the get_response_from_query function returns not only the response but also the docs, which is actually quite cool: we can look at the documents it used to produce this answer, so you get a reference to the original content, which is very convenient if you want to do additional research or fact-check your model to see whether the answers it gives are correct.

So now we basically have a working app, and all you have to do is build a simple web page around it, host it on a server somewhere, and people can interact with it: fill in a YouTube URL, ask questions, and it does the rest. And really, when I look at all of this my head starts to spin; I have so many ideas. For example, with this approach alone: say you create a list of all the channels that talk about a specific topic. You want to stay up to date on AI, so you list all the popular podcast channels, create a little script that periodically checks whether they have published a new video, scrape the URLs, and process all of those videos with these functions. Then engineer your prompt in such a way that you can extract useful information, which you could use for research, or to run a social media account, for example a Twitter account tweeting about the latest updates in AI, or even a YouTube channel where you talk about AI. You can really scout everything and ask: what is the Lex Fridman podcast saying about AGI, what is Joe Rogan saying about AGI, all automatically. And you can combine this, chaining it together with different agents that store the information in files; the possibilities really are endless.

Like I've said, I am going to dive deep into this, because there are so many opportunities right now, and as I learn I will keep you up to date on my YouTube channel, so if you're interested in this, make sure to subscribe so you don't miss any future videos. It's really been amazing how many requests I'm already getting from companies to help them implement these tools, help them with AI; I've been getting tons of messages, so it's a really exciting moment for me as a freelancer to also start working with smaller clients and companies and implement these tools. And if you also feel like you want to do more with this, want to exploit this opportunity and start working on your own freelance projects but don't really know where to start, then you should really check out Data Freelancer, a mastermind I host, created specifically for data professionals who want to kickstart and launch their freelance career in data but don't really know where to start. In this mastermind you will literally learn everything you need to land your first paid project.
I share all the systems and models I've developed over the years to basically systemize freelancing in data and make sure you never run out of clients, and you become part of a community of other data professionals who are working on their freelance careers. We are all here together to work on the same goals: making more money, working on fun projects, and creating freedom. It feels like hanging out with friends, but with real business results. So if you're considering freelancing, or want to take advantage of all the amazing opportunities that are out there right now in the world of AI but don't really know where to start, then check out Data Freelancer: first link in the description, where you can sign up for the waitlist.
|
Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)
|
2023-05-29 00:00:00
|
Alejandro AO - Software & Ai
|
Good morning everyone, how's it going? Welcome to this new video tutorial, in which I'm going to show you exactly how to build the application you see right here: a chatbot that allows you to chat with multiple PDFs from your computer at once. Let me show you how it works. For this example I'm going to upload the Constitution and the Bill of Rights; when I click on Process, it embeds them and puts them into my database. Now I can start asking questions about them, for example "What are the three branches of the United States government?", which relates to the Constitution, but I can also ask about, say, the First Amendment, which is in the Bill of Rights, and it can answer that too. It only answers questions related to the PDF documents you upload, so it really is restricted to the information you provide. I'm going to show you how to build this not only with OpenAI but also with free Hugging Face models, so you don't break your wallet while learning how to do this. It's a little more complex than the previous projects I've shared on this channel, but be sure to follow to the end; I'm sure the result is worth the effort. I hope you enjoy it, and if you like videos like this, don't forget to subscribe.

Real quick, let me guide you through the setup. I have already created my virtual environment, which is where all of my dependencies are going to be installed; as you can see, I'm using Python 3.9 for this one. We're going to use a .env file to store our secrets, and a .gitignore file that tells git to ignore these files so that our secrets and local configuration are not tracked. There's also a file pinning my Python version, and app.py, which is where all the action will take place. The first thing we want to do is install the dependencies. You run pip install with: streamlit, to create the graphical user interface; pypdf2, to read our PDFs; langchain, to interact with our language models; python-dotenv, to load our secrets from .env; faiss-cpu, as our vector store; and openai and huggingface-hub, because I'm going to show you how to do this both with OpenAI models and with Hugging Face models. Once you do that, hit enter; it will probably take longer for you, because I already have these installed. Now that our environment is completely set up, we can actually start coding.

All right, let's start with the graphical user interface. First, I'm going to add this standard test right here, which basically checks that the application is being executed directly and not imported: whatever is inside this condition only runs if the application is executed directly. Then you create your main function, and whatever is inside that function is what runs in the application. So if I put print("hello world") in there and run it, you will see Hello World right here, because I am executing the file directly.
There you go. Now let's build the graphical user interface. As I mentioned before, we're going to use Streamlit, and for that we start by importing the package we previously installed: import streamlit as st. The first thing I want to do is set the page configuration with st.set_page_config; I'm just going to pass two parameters, but you can pass as many as you want: the page_title, set to "Chat with multiple PDFs", and a page_icon, set to the books emoji. I'll also add a header with st.header, the main header of the application, which will also read "Chat with multiple PDFs" plus the books emoji. Remember that below the header we want a text input where the user can type their questions, so let's add st.text_input; whatever you put inside is the label of the input, so something like "Ask a question about your documents:". That label appears above the text input.

We also want to add a sidebar where the user can upload their PDF documents. For that you use st.sidebar, and to put things inside it you write "with st.sidebar:" and then whatever you put inside that block goes into the sidebar. Watch out: do not add parentheses to st.sidebar here, because otherwise it won't run (you would have to pass parameters, and you don't need that); just leave it as is and write the contents of your sidebar inside the block. In our case we add a subheader that reads "Your documents", and then another Streamlit element that lets you upload files, called st.file_uploader. Just as with the text input, you pass the label inside the parentheses, and my label is going to be "Upload your PDFs here and click on 'Process'" (sorry about the ambulance). Let's also add a button: st.button("Process"). There you go.

Now, remember how you run a Streamlit application: you don't do python app.py, because that won't work; you have to run it using Streamlit, so streamlit run and then the name of your file, in my case streamlit run app.py. The app is now running, and as you can see I have my graphical user interface: it seems to be working correctly, here I can ask questions, and here I can upload files, in this case the Bill of Rights and the Constitution. But so far it's just a graphical user interface; nothing is happening behind it yet.
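Here is a minimal sketch of the app.py skeleton built so far:

```python
# Sketch of the Streamlit skeleton; run with `streamlit run app.py`,
# not `python app.py`.
import streamlit as st

def main():
    st.set_page_config(page_title="Chat with multiple PDFs", page_icon=":books:")
    st.header("Chat with multiple PDFs :books:")
    st.text_input("Ask a question about your documents:")

    with st.sidebar:                 # note: no parentheses on st.sidebar
        st.subheader("Your documents")
        st.file_uploader("Upload your PDFs here and click on 'Process'")
        st.button("Process")

if __name__ == "__main__":          # only runs when executed directly
    main()
```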
So let's add some logic to it. First we need to create our API keys, because we're going to use services from OpenAI and Hugging Face, and to connect to their APIs we need their API keys. We store the keys inside our .env file, which is the place for things that are supposed to stay secret and not be tracked by git: whatever you put in there won't end up on GitHub when you upload your repository. That's the way to keep your secrets away from the public. We create two variables there: OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN. To create the OpenAI key, go to platform.openai.com, create an account, then go to your account's API keys page and click "Create new API key"; I'm just going to call it "PDFs", then copy it and paste it into .env. Same for Hugging Face: go to huggingface.co, Settings, Access Tokens, and create a new token, which I'll also name "PDFs" (I'll give it write access in case I want to use it again), copy it, and paste it right here.

Now that the API keys are set, we need to be able to access them from the app. For that we use the python-dotenv package that we installed before: from dotenv import load_dotenv, and then call load_dotenv() inside main. That enables your application to use the variables inside .env, and LangChain will then be able to access all of our API keys. That's also why we have to name the variables exactly like this: since we're using LangChain, there's a very specific way to name the API key variables. If you were building your own framework you could name them however you want, but with LangChain, just remember to use these names and to call load_dotenv. Click save, and now our API keys are set.
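A tiny sketch of the secrets setup (the key values are placeholders):

```python
# .env (never committed to git) -- LangChain reads these exact names
# from the environment automatically:
#   OPENAI_API_KEY=sk-...
#   HUGGINGFACEHUB_API_TOKEN=hf_...
from dotenv import load_dotenv

load_dotenv()  # call at the top of main(), before using any LangChain model
```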
Now I'm going to show you how the logic of this application works. If you've already watched my previous video on how to chat with a single PDF, this will sound very familiar; if you haven't, be sure to watch it, because it's a much more thorough and detailed explanation of how this application works, with examples. In case you just want a quick refresher, here it is.

We take the PDFs from our user, as many as they want, and read all the text from them, giving us one huge string of text; then we divide that string into smaller chunks of text, and it is those chunks that we later convert into embeddings. Now, what are embeddings? You can think of them, in a very simple way, as a vector representation of your text, a list of numbers representing your text, and something very important about this list of numbers is that it also contains information about the meaning of your text. This means we can potentially find text with similar meaning just by comparing the number representations, and that's exactly what we're going to do. Once we have the vector representation of each of your chunks of text, we store all of those embeddings in a vector store, a knowledge base: basically a database of all your vector representations. That database can be Pinecone, it can be Chroma, it can be FAISS; in our case we'll use FAISS, but Pinecone is probably the most popular one, which is why I added its logo to the diagram.

Now that we have our database, we can take questions from our user. The user asks something like "What is a neural network?", and we embed the question using the same algorithm we used for the embeddings of our chunks of text. That allows us to find, inside the database, the vector representations that are similar to the question: among all the chunks of text, we find the ones with similar meaning or semantic content to what the user asked. That gives us a ranked list of the chunks of text relevant to the user's question, and we send those as context to our language model. The language model doesn't actually know what's in the PDFs: the model is already trained, whether it comes from Hugging Face or from OpenAI. What we do is find the chunks of text relevant to the user's question, rank them in order of importance, and send them as context. Behind the scenes the prompt looks something like: "Based on the following pieces of text, answer the following question," followed by the chunks selected by our vector store and the question. The language model can then answer the question from the context we gave it, and that answer is sent back to our user. That's what actually happens behind the scenes, and LangChain makes all of this extremely easy with just a few commands, so let me show you how that works.

All right, what we're going to do now is deal with the sidebar. We have our document drag-and-drop, but so far it only takes one file, as you can see ("only one file allowed"), so we're going to enable multiple files, and we're also going to handle what happens when the user clicks on the Process button.
All right, so what we're going to do now is deal with the sidebar. Remember that we have our document drag-and-drop here, but so far it only takes one file — as you can see, "only one file allowed" — so we're going to enable multiple files, and we're also going to handle what happens when the user clicks on Process. In order to take more than one file, we come right here to our file uploader in the sidebar. There's a very convenient parameter called accept_multiple_files, and we just set it to True — there you go — and we store the contents of this file upload in a variable called pdf_docs. Now we want to do something whenever the user clicks on the button, and to do that we just add an if before the button, so the button expression becomes True only when the user clicks it, and that's where we actually start processing information. Inside this if we're going to do three things: first, get the PDF text — just the raw contents of all the PDFs; then get the text chunks, which is this part right here, to divide it; and then create our vector store with the embeddings. We're going to build these three functions in a moment, but before that, something very useful when you're dealing with these kinds of processes — especially in Streamlit — is to add a spinner. You do st.spinner("Processing"), and then, just like with the sidebar, you use it as a context manager — with st.spinner(...) — and wrap everything inside it. What this does is run all the contents inside the spinner while the user sees a spinning wheel, which tells the user that the program is actually running and processing things and not frozen; it's just to make it more user friendly. Okay, so now we can actually start building these functions, starting with getting the text from the PDFs.
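Put together, the sidebar logic we just described looks roughly like this — a sketch, where the widget labels are my guesses and the three helper functions are built over the next steps:

```python
import streamlit as st

with st.sidebar:
    st.subheader("Your documents")
    pdf_docs = st.file_uploader(
        "Upload your PDFs here and click on 'Process'",
        accept_multiple_files=True)  # allow more than one file
    if st.button("Process"):         # only True on the click that submits
        with st.spinner("Processing"):
            raw_text = get_pdf_text(pdf_docs)           # 1. raw PDF text
            text_chunks = get_text_chunks(raw_text)     # 2. split into chunks
            vectorstore = get_vectorstore(text_chunks)  # 3. build the store
```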
All right, so now we're going to take all of the raw text from our PDFs. I'll create a new variable called raw_text and a new function called get_pdf_text, which takes our PDF documents. The objective of this function is to take pdf_docs, which is a list of PDF files, and return a single string with all of the text content of those PDFs. So let's create that function up here. We'll need a library we installed before — PyPDF2 — and from PyPDF2 we import a class called PdfReader; you'll see how we use it in a moment. Inside the function, I first initialize the variable that will contain all of the raw text of my PDFs, and then I loop through all of my PDF objects and read them, concatenating their contents onto that variable. So: for pdf in pdf_docs, I initialize a PdfReader object from the PDF object I want to read. What that does is create a PDF object that has pages, and it's actually the pages you're able to read from, so we loop through the pages as well to read each one and add it to the text: for page in pdf_reader.pages. Each page has a method called extract_text, which extracts all of the raw text from that page of the PDF, and we concatenate it onto our text variable, returning the final text variable at the end. Let me just recap what happened here: we initialized a variable called text, in which we store all of the text from our PDFs; we looped through all of our PDFs; we initialized one PdfReader object per PDF; we looped through all of the pages of each PDF; and we extracted the text from each page and concatenated it onto our text variable. In the end we get a single string with all of the contents of our PDFs inside the variable called raw_text. Let me show you real quick how this looks: if I do st.write(raw_text), I'm supposed to see it — I think I stopped the application; yeah, there you go. Now, when I upload my documents and click Process, it first shows the spinner saying "Processing" and then displays the raw text, because that's what we're getting here. So let's do that: I'm going to upload the Constitution and the Bill of Rights, and when I click Process, you can see I now have all of the text right here. Now I'm able to actually divide it into chunks of text, so let's do that.
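Here is how that function ends up looking — a sketch of the step just described:

```python
from PyPDF2 import PdfReader

def get_pdf_text(pdf_docs):
    # Concatenate the raw text of every page of every uploaded PDF
    # into one single string.
    text = ""
    for pdf in pdf_docs:
        pdf_reader = PdfReader(pdf)      # one reader per uploaded file
        for page in pdf_reader.pages:
            text += page.extract_text()  # may be empty for scanned pages
    return text
```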
All right, so what we want to do now is split this huge piece of text into chunks that we can feed to our model. If you saw the previous video, you already know what I'm going to do — it's actually very simple. I create a new variable, text_chunks, and a new function called get_text_chunks; we pass in a single string of text, and it returns a list of chunks of text that we can feed to our database. Okay, so let's create this function up here. To divide our text into chunks — pieces, paragraphs — we're going to be using LangChain, specifically a class called CharacterTextSplitter: from langchain.text_splitter import CharacterTextSplitter. First we create a new instance of it — our text_splitter is a new CharacterTextSplitter — and it takes several parameters. The first one is the separator, which we set to a single line break, "\n". Then the chunk_size, which in this case we set to 1000, meaning a thousand characters. Then we set a chunk_overlap of 200. Just to be clear: the chunk size is the size of each chunk — if you start here, a thousand characters is probably going to end somewhere around here — and the chunk overlap is basically there to protect you whenever your chunk ends in the middle of a sentence. You don't want to start the next chunk right there, because you'd lose all the meaning of that sentence, so the overlap starts the next chunk a few characters earlier: if it's 200, the next chunk starts 200 characters before, to be sure each chunk contains full sentences and all the meaning you need. Then the length_function, which is just Python's built-in len. And then we basically just create our chunks: we call text_splitter.split_text, pass in the text, and return the chunks. So if I'm not mistaken, we now have this element containing our split_text method, and it returns a list of chunks of about a thousand characters each, with an overlap of 200. Let's see how that looks: we can st.write the chunks to display them, refresh the page, load our two documents again, and click Process — and there you go. The first chunk is this one, the second one is this one, and as you can see the second one starts back here: that's the overlap in action. So now you have all of your chunks divided, and it's time to actually use those chunks to create the vector store. We've created this part, and now we're going to do this part right here — it's very quick, so bear with me.
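Condensed, the splitting function from this step looks like this:

```python
from langchain.text_splitter import CharacterTextSplitter

def get_text_chunks(text):
    # Split on newlines into ~1000-character chunks, with a 200-character
    # overlap so sentences are not cut in half at chunk boundaries.
    text_splitter = CharacterTextSplitter(
        separator="\n",
        chunk_size=1000,
        chunk_overlap=200,
        length_function=len)
    return text_splitter.split_text(text)
```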
All right, so now that we have our chunks of text, we're going to create our embeddings. If you remember correctly, the embeddings are this part right here — the part where we create the vector representation of our chunks of text in order to store them in our database, so that we can run the semantic search to find similar chunks of text relevant to our question. I'm going to show you two ways of doing this. The first way, which I'll show right now, uses OpenAI's embedding models. This is paid for, so you have to keep that in mind in your business model if you're going to be loading documents that are thousands of pages long. The prices are at openai.com/pricing under embedding models — the latest one is ridiculously cheap. I remember someone on Twitter saying you could embed the entire transcription of Joe Rogan's podcasts for 40 dollars or something like that, so it's not super expensive, but it's definitely something to keep in mind. The second one, which I'll show after, is a free one called Instructor, and it's actually very good; however, it's going to be way slower if you're just running it on your own computer and expecting your CPU to make all of the embeddings — you really want a GPU or something like that for it to be performant. Something I wanted to mention: unlike with language models, where GPT-3.5 and GPT-4 are pretty much the benchmark — OpenAI's language models are undoubtedly the best on the market, so other models are measured against them — when it comes to embedding models, OpenAI is not at the top. If you look at the official leaderboard on Hugging Face, you can see that OpenAI's model, Ada v2, is actually in sixth position, and Instructor, the one I'm going to show you, is in second position; it's probably the one I'd recommend if you have your own hardware. I'm not sure whether they make it available through the Hugging Face Inference API, but it might be. Just keep that in mind, and let's do the OpenAI one for now. So let's create our vector store from these text chunks using OpenAI's embeddings: vectorstore = get_vectorstore(text_chunks), and then we define that function down here. It's actually a very simple function. Since for now we're using OpenAI's embeddings, we do from langchain.embeddings import OpenAIEmbeddings, and we're also going to use FAISS as our vector store. FAISS is pretty much like Pinecone or Chroma — just a database that allows you to store all of these numeric representations of your chunks of text. The difference with FAISS is that it runs locally: in this case we'll be storing all of our generated embeddings on our own machine instead of in the cloud, which means they're erased when we close the application. Probably in another video I'll show you how to use an external, persistent database, but for now: from langchain.vectorstores import FAISS. Here we say the embeddings equal OpenAIEmbeddings(), and the vectorstore equals FAISS.from_texts — we're creating the database from texts, because we have the chunks of text — and it takes two parameters: the first is texts, our chunks of text, and the second is embedding, the embeddings object we just created. And that's what we return: return vectorstore. There you go — we've successfully created this vector store using OpenAI's embeddings.
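Summed up, the OpenAI-backed version of the function:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

def get_vectorstore(text_chunks):
    # Embed each chunk with OpenAI's embedding model (paid, runs on
    # OpenAI's servers) and index the vectors in a local, in-memory
    # FAISS store, which is erased when the app closes.
    embeddings = OpenAIEmbeddings()
    vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
    return vectorstore
```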
Let me show you how fast that is, because we're just sending the chunks of text to OpenAI's servers, and it's them doing all the heavy lifting. So: streamlit run app.py — there we go, the application is running. I upload the same files as before, click Process, and let's see how long it takes to process — remember we're using the OpenAI API key I set before — and it's ready already. That wasn't very long, and it's OpenAI's servers that did it. If we want to do it on our own computer, we can use the Instructor embeddings instead, so let me show you how to do that right now. We're going to do exactly the same thing — remember, we've just successfully created our vector store, our database with all of our embeddings — but do it for free. Just a moment ago I was charged for embedding the twenty-ish pages of my two documents; now we're going to do it on my machine, for free. We'll be using, as I mentioned before, the Instructor embeddings, and as you can see they're actually ranked higher than OpenAI's embeddings. So let's click on that one — this is the model we're going to use. Something important to keep in mind, which I forgot to mention: you have to install a couple more dependencies. You do pip install of two packages: InstructorEmbedding, the main package we'll be using, and sentence_transformers, a set of dependencies that InstructorEmbedding relies on. This was super fast for me, but these are pretty heavy packages, so don't worry if it takes several minutes to finish downloading. Once that's installed, you can come right here — it's inside the same module, langchain.embeddings — and use HuggingFaceInstructEmbeddings. So instead of OpenAIEmbeddings, we initialize our embeddings from HuggingFaceInstructEmbeddings, and we just pass in the model_name, which is exactly the one you see on the model page — we take that and paste it right here. Now we should be able to pass these embeddings into our vector store, and it should work; however, as you'll see, it's going to be way, way slower. Let me show you what I mean: I do streamlit run app.py, put the app on one side and the terminal on the other. As I mentioned, this is barely 20 pages or so, and remember that with OpenAI it took about four seconds to embed everything. Now, if I click Process, you'll see it starts to process: it loads the transformer — as you can see, it's loading on my computer — then it loads on the CPU, and here you have the start time. It started with my CPU; I don't have a GPU connected to this machine right now, so I'm going to pause the video and show you in a moment how long it actually took to embed only 20 pages. All right, it finally finished — it didn't actually take that long; it took two minutes on my computer.
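The free, local variant as a sketch — same interface as before, with the model name assumed to be the instructor-xl checkpoint from the leaderboard page:

```python
# pip install InstructorEmbedding sentence_transformers   (heavy downloads)
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import FAISS

def get_vectorstore(text_chunks):
    # Vectors are computed on YOUR machine: slow on CPU, much faster
    # with a GPU. "hkunlp/instructor-xl" is an assumption here.
    embeddings = HuggingFaceInstructEmbeddings(
        model_name="hkunlp/instructor-xl")
    return FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```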
Do keep in mind, though, that this can easily scale up if your computer is not powerful enough or if you're relying only on a CPU. So that's how to use the Instructor embeddings in your application. Now, we've successfully finished all of this part, and we can actually do this next part — LangChain lets you do it super quickly, in just one single chain, and we're going to include memory in it as well, so let me show you that in just one moment. All right, so now it's time to start creating this part right here, and it's actually super quick and super simple, because LangChain provides a chain that does this out of the box. It's very convenient, because it allows you to add memory to it: this means you can ask a question about your document and then ask a follow-up question about the same thing you're talking about, and the chatbot is going to know the context of your question. So let me show you how to do that real quick. We come right here and create an instance of this conversation chain. I'm going to create a new function for that: I'll store the result in a variable called conversation, and the function is get_conversation_chain, which takes my vectorstore. There you go — and just like before, we create this function up here. Inside it we have to initialize a few things. First of all, since we're dealing with a chatbot that has memory, we have to initialize an instance of memory. We import that from LangChain — it's called ConversationBufferMemory — so from langchain.memory import ConversationBufferMemory, and now we can initialize it: memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True). If you want to know more about how memory works in LangChain — buffer memory, entity memory, and other kinds of memory — be sure to check my video; I have one specifically about that. Once we have this memory, we can initialize the chain itself: conversation_chain = ConversationalRetrievalChain, which I actually haven't imported yet, so, if I'm not mistaken: from langchain.chains import ConversationalRetrievalChain. This chain allows us to chat with our vector store — with our context — and have some memory attached to it. We build it with ConversationalRetrievalChain.from_llm, and this takes a few things: the first one is the language model we're going to use, so let me initialize that right here.
For the language model I was going to use OpenAI — importing it from langchain.llms, which would use Davinci — but actually, you know what, let's use a chat model instead: ChatOpenAI, imported from langchain.chat_models, and we initialize our llm from it. So now we can use this llm right here. The first argument our ConversationalRetrievalChain takes is the language model: llm=llm. The second argument is the vector store, or rather the retriever: I take the vectorstore I received right here and say vectorstore.as_retriever(). And then memory is the memory I initialized just a moment ago. That's my conversation chain, and I just return it: return conversation_chain.
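The whole function from this step, as a sketch — the commented-out lines anticipate the free Hugging Face swap shown at the end of the tutorial (repo id and kwargs there are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import HuggingFaceHub
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

def get_conversation_chain(vectorstore):
    llm = ChatOpenAI()
    # Free alternative via the Hugging Face Inference API (rate-limited):
    # llm = HuggingFaceHub(repo_id="google/flan-t5-xxl",
    #                      model_kwargs={"temperature": 0.5})
    # The memory keeps the chat history so follow-up questions work.
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory)
    return conversation_chain
```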
Now, something important to keep in mind: we've just created our conversation object — the thing that will generate the new messages of the conversation. At a very high level, all it does is take the history of the conversation and return the next element in it. This is the object we'll use throughout the entire application later on, so it's a good idea to make it persistent. The thing about Streamlit is that whenever something happens — someone clicks a button, or submits something in an input field — Streamlit has a tendency to rerun its entire script, which means it will probably re-initialize some variables. If I don't want that to happen — if I want some variables to be persistent over time — I can use st.session_state: that way the variable is linked to the session state of the application, and Streamlit knows it's not supposed to be re-initialized. In our case the rerun isn't the main concern, because this code is only triggered when we click the button; but session state is also useful when you want to use a variable or object across the entire application. As you can see, we initialize the conversation object here inside the sidebar, but we may want to use it outside of the sidebar, which would be outside the scope of this piece of code. A nice thing about session state is that you can use it outside of that scope: st.session_state.conversation is available anywhere, so that's a good way to use objects outside of their scope when you're using Streamlit. One more important thing: it's good practice, if you're using a session-state object, to initialize it beforehand. So we come up here and check: if "conversation" not in st.session_state, then st.session_state.conversation = None. This way, when the application reruns itself, it checks whether conversation is already in the session state: if it hasn't been initialized, it sets it to None, and if it has, it leaves it alone — so we can use it anywhere during the application. We'll do the same thing with the history of the chat messages; that's just how you make your variables persistent over the whole life cycle of your application. Just to make it clear, this is not about refreshing the page: it only lasts during the session of the application, which is while the application is open; for some reason Streamlit just reruns some code from time to time. All right, so now that we've done this, I'm going to show you how to display messages. In a previous video I showed you how to do this using a Streamlit package called streamlit_chat, which is pretty convenient if you want to get up and running real quick; this time I'll show you a different way, which is basically inserting custom HTML into your application. If you're at ease with HTML, this is probably a good idea for you; if you're not, probably not — but it's pretty convenient. I'll create a new file called htmlTemplates.py; this is some code I had already prepared. What we have here is basically the CSS styling for these two classes — the chat message for the user and for the bot — and two templates: this one for the user and this one for the bot. As you can see, I already added some avatar images, but you can add your own just by replacing what's in the src attribute. And here's the message — this is the part we'll be replacing; actually, I don't like it that way, I usually write my variables like this — there you go. Now we can save this and import these three elements into our application: from htmlTemplates import css, bot_template, user_template. Remember that the CSS has to be added up at the top — just like on a website, you add your CSS on top — so we do st.write(css, unsafe_allow_html=True). Then, just to show you how it works outside of the sidebar, I add it right here underneath the input element: st.write with the user_template, allowing unsafe HTML — this only tells Streamlit it's supposed to render the HTML inside rather than show it as text — and the same with the bot_template. And last but not least, let me show you real quick how to replace the placeholder inside: in Python the function is replace, so we call .replace on the template and swap the message placeholder for our own text.
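A trimmed-down sketch of what htmlTemplates.py might contain — the real file also styles avatar images, and the class names and colors here are my guesses, not the video's exact values:

```python
# htmlTemplates.py -- minimal sketch (colors/classes assumed)
css = """
<style>
.chat-message { padding: 1rem; border-radius: 0.5rem; margin-bottom: 1rem; }
.chat-message.user { background-color: #2b313e; }
.chat-message.bot  { background-color: #475063; }
.chat-message .message { color: #fff; }
</style>
"""

user_template = '<div class="chat-message user"><div class="message">{{MSG}}</div></div>'
bot_template  = '<div class="chat-message bot"><div class="message">{{MSG}}</div></div>'
```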
For the user template I replace the placeholder with "Hello human", and for the bot template we do the same thing and say "Hello robot". Let's see how that looks: if I refresh right here, there you go — "Hello robot" and "Hello human". This looks pretty professional, like a real chatbot: here is the human, here is the bot, and all I had to do was replace the message variable inside the template with my own personalized message. I suppose you can start to see how this is going to play out when we replace it with our own conversation, so let's create the conversation and use these templates to display the new messages. All right, so now it's time to actually generate the conversation: when the user fills something in here, we want to handle that input. First of all, I'm going to get rid of the Hugging Face Instruct embeddings, because they're just too slow and that was only for demonstration purposes — I'll continue using the OpenAI embeddings for now, but at least you now know how to use the Instruct embeddings too. So I come right here to my text input and handle the submission: we store the value of the input in user_question, and then, if user_question: — this only triggers when the user submits a question — we call handle_userinput(user_question). Just as before, we create the function up here. And this is pretty interesting: we're going to use the variable we created just a moment ago in the sidebar — the conversation — to generate the answer to the user's question, and it's very simple to do. We say response = st.session_state.conversation(...), and we pass a key-value pair: question, with the user_question as the value. Let me show you what this looks like with st.write(response). So now, when the user submits a question, I handle that input and write out the response from the language model. Remember, this conversation chain already contains all of the configuration from our vector store and from our memory: this means that if we use it again, it already remembers the previous question — since I set up the memory, if I keep asking questions, it keeps the previous context. Let's see what this looks like: I refresh, bring in my two test files again, and process them. Since I'm using OpenAI — oh no, I didn't rerun with the OpenAI embeddings... okay, there we go. So now I can ask, "What is the First Amendment about?", and when I press Enter, it's supposed to give me the answer.
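Here is how the main body wiring from the last few steps might look all together — a sketch; the page title and input label are assumptions, and it relies on the imports from the earlier snippets:

```python
import streamlit as st
from htmlTemplates import css, bot_template, user_template

def main():
    st.set_page_config(page_title="Chat with multiple PDFs")  # title assumed
    st.write(css, unsafe_allow_html=True)  # CSS goes in "at the top"
    # Initialize session-state keys up front so reruns don't wipe them.
    if "conversation" not in st.session_state:
        st.session_state.conversation = None
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = None
    user_question = st.text_input("Ask a question about your documents:")
    if user_question:                      # only True after a submission
        handle_userinput(user_question)
```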
However, it returns an entire object with a lot of things in it. Here you have it: it has the answer, and it also has the entire chat history — and that's what's important to us, because remember, we want to display the entire history of the chat. So we're going to take this object and show everything in the chat history down here, formatted with the template we made. What I'll do is remove the st.write(response) part and create a new session-state variable: st.session_state.chat_history, set equal to response["chat_history"]. That's what we're going to display. We write: for i, message in enumerate(st.session_state.chat_history) — this basically lets me loop through the entire chat history with an index and the content at that index. If i % 2 == 0, we st.write our user_template, and just as before we replace the placeholder — but this time not with something fixed; we replace it with the message, or more precisely the content of the message, so message.content. Since I used i % 2 == 0, this takes the even indices of the history, which are the user's messages; the else branch handles the odd indices, where we st.write the bot_template, replaced the same way. Now we can delete the old st.write(response), and if I save this, it should work. Oh — one thing: remember that when you use session state, you have to initialize it up at the top of your application, so if "chat_history" not in st.session_state, we initialize it to None, so we never start using it without it having been initialized. All right, so let's test it: I drop my two test documents right here, click Process, and it seems to be processing — it's running through the Bill of Rights — so it's supposed to know the answer to "What does the First Amendment say?" Let's see if it knows, and whether it displays the message templates I created before. There you go: "What does the First Amendment say?" — "The First Amendment states..." and so on. Now let's see if it grasps some sort of context: if I say "How about the second one?" and submit, there you go — it knows we're talking about the Second Amendment, because we were talking about the First Amendment before. So it has some sort of memory, it has this chat-like structure, and I hope you've found this useful and educational. Let me just show you super quick how to do the same thing using Hugging Face models instead of OpenAI's models.
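The finished handler from this step, condensed into one sketch:

```python
def handle_userinput(user_question):
    # The chain returns a dict with the answer AND the full chat history.
    response = st.session_state.conversation({"question": user_question})
    st.session_state.chat_history = response["chat_history"]
    for i, message in enumerate(st.session_state.chat_history):
        if i % 2 == 0:   # even indices: the human's turns
            st.write(user_template.replace("{{MSG}}", message.content),
                     unsafe_allow_html=True)
        else:            # odd indices: the bot's replies
            st.write(bot_template.replace("{{MSG}}", message.content),
                     unsafe_allow_html=True)
```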
Remember that right here we used ChatOpenAI, but we can actually do pretty much the same thing with Hugging Face. To do that, I copy what I have here and paste it below — it's pretty much the same thing, but we have to import HuggingFaceHub: from langchain.llms import HuggingFaceHub, and now we can use it right here. In this case I'm just using Google's FLAN-T5, but you can use any language model on the Hub: if you come here and find a model you want to try with this structure, you just have to write its repository ID right here. Remember that we installed huggingface-hub earlier, so this comes with that — I don't think you have to install anything else, but in any case, just read the errors; they usually tell you exactly which dependency you're missing. So just write the repository ID — from Facebook or whoever, I don't know; just choose a language model that works — and then set the temperature. For this one in particular, a temperature of zero was causing problems, but anything other than zero should be all right. So I'm just going to test it like this and show you how it works: I come right here, refresh, load the two files again, and once it's processed, I should be able to ask about the same First Amendment. "What does the First Amendment say?" — and it's supposed to be calling it now... yeah: "Congress shall make no law respecting an establishment of religion, prohibiting free speech..." There you go. Here I passed in Google's FLAN-T5, but feel free to pass in whatever language model you like. Note that I am not running this locally — I'm using the Hugging Face Inference API — so this works pretty much just like OpenAI: I send the request to Hugging Face and get the result back, but it's free, and it's rate-limited, just for testing. So yeah, I hope this was useful for you and that you enjoyed it. I hope you now have a very nice project you can show your employers and your clients, and that you start creating really nice, productive, and beautiful applications like this to solve real-world problems. If you want to see more of this, be sure to subscribe, and if you have any questions, just let me know in the comments. And congratulations for following along to the end — this was a somewhat longer and more complex project than the ones I'd done before, so let me know if you like this kind of project; I'm going to keep publishing content for beginners as well. Thank you very much for watching, and I will see you next time.
|
LangChain Crash Course - Build apps with language models
|
2023-04-09 00:00:00
|
Patrick Loeber
|
hi everyone, I'm Patrick, and welcome to this new tutorial about LangChain. LangChain is a framework for developing apps powered by large language models — for example, if you want to build your own ChatGPT app based on your own data, this framework is perfect for it, but there's a lot more to it. Here you can see all its key functionalities, which are divided into different modules. There are models, so you can access different models with LangChain. Then there are prompts, so you can easily create your own prompt templates. Then you can manage memory with it. Then there are indices — these are needed to combine the language models with your own text data. Then there are chains — these are sequences of calls; for example, you can combine multiple different models or prompts. And lastly there are agents — these are super powerful; for example, you can tell an agent to access Google Search. In this video we go over all of these modules, and by the end you should have a great understanding of how this framework works, so you can hopefully build your own AI apps with large language models. So let's get started. Installation can be done with pip install langchain, and later, when we want to use specific LLMs or a specific vector database, we also have to install the corresponding packages — we'll get to this in a moment. The first core functionality is a generic interface for different LLMs, and we can look at the different integrations here: for example, we have OpenAI, we have Cohere, we have Hugging Face, and a lot more. Most of these integrations work via their API, but we can also run local models with LangChain. Let me show you how to use OpenAI. In this case we also have to install the Python SDK, and then we have to set our OpenAI API key — you can either do this in Python code, like this, or set it as an environment variable on your local system. Then we import the OpenAI interface from LangChain and create our OpenAI LLM; here we can set different parameters, including different model names — this is the default model currently. Then we can give it a text and run the model, and this should give you the same output you would get with the official OpenAI API. Here it created a company name; let's run it again and see that we get a different output — yeah, this is the company name it suggests. Now let me show you how to use the Hugging Face Hub as a second example. In this case we have to set the Hugging Face Hub API token, which you get from the Hugging Face website; then we import HuggingFaceHub and create our LLM by setting the repo_id — in this case we use this model from Google — and here again we can set different parameters. Then we run our LLM: in this case we say "translate English to German" and then the sentence, and this works. So now you know how to access different models with LangChain.
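The two model setups just described, condensed into one sketch — the prompts and the repo id are illustrative, and key values are placeholders:

```python
import os
from langchain.llms import OpenAI, HuggingFaceHub

os.environ["OPENAI_API_KEY"] = "..."             # or set in your shell
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "..."

llm = OpenAI(temperature=0.9)                    # also: pip install openai
print(llm("Suggest a name for a company that makes colorful socks"))

llm_hf = HuggingFaceHub(repo_id="google/flan-t5-xl")  # repo id assumed
print(llm_hf("translate English to German: How old are you?"))
```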
The second important functionality is prompt templates. LangChain facilitates prompt management and optimization, because often — most of the time — we don't want to pass the question directly to the model like this. Here we ask, "Can Barack Obama have a conversation with George Washington?" Let's run this and see what we get. The output is: Barack Obama is a current president, George Washington was a past president. So this is not quite correct, and it also didn't answer the actual question. A better way to design our prompt is to write "Question:", then the actual question, then say "Let's think step by step.", and then "Answer:". Let's pass this to the model and see what we get — and now the answer is: George Washington died in 1799, Barack Obama was born in 1961, so the final answer is no. This time we get a correct answer, and LangChain makes it super simple to create these prompt templates. For this we say from langchain import PromptTemplate — this is the most basic one — then we specify our template, where we define a placeholder like this, and when we create our PromptTemplate we also give it the input_variables as a list; here we have to use the same names we used for the placeholders. Now let's run the cell, and then we can say prompt.format, using the same name as a keyword argument — question — and give it the question, and you can see this will be our final prompt. But we cannot pass this prompt template directly to the LLM: if we run that, we get a TypeError. To combine a prompt template with a model, we have to use a so-called chain. With chains we can combine different LLMs and prompts in multi-step workflows, and there are a lot of how-to guides for different use cases: the most basic one is the LLMChain, but there are also chains for conversations, question answering, or summarization — for these, have a look at the documentation. Now let's look at the LLMChain: we import it, then create it, giving it the prompt template and the LLM as parameters; then we create the same question again, and we say llm_chain.run with just the question. Remember, the question is passed to the prompt template, and the final prompt is given to the LLM. If we run this, we should again get a good response — and as you can see, we again get a correct answer. So this is how to work with chains in LangChain.
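The template-plus-chain pattern from this section, as one runnable sketch:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

template = """Question: {question}

Let's think step by step.

Answer: """
# Placeholder name must match input_variables.
prompt = PromptTemplate(template=template, input_variables=["question"])
question = "Can Barack Obama have a conversation with George Washington?"
print(prompt.format(question=question))   # the final prompt text

# A template can't be passed to the LLM directly -- chain them instead:
llm_chain = LLMChain(prompt=prompt, llm=OpenAI())
print(llm_chain.run(question))
```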
there are four available, and the one you'll probably see the most is the zero-shot-react-description type, which determines which tool to use based solely on the tool's description. Now let me show you how to use this. We import load_tools and initialize_agent. In this case we want to use the Wikipedia tool, so we also have to install the wikipedia Python package. Then, here again, we set up a model — and here I have to say that agents and tools work best with the OpenAI models — so let's create our LLM. Then we say load_tools, and here, as a list, we can use any of the supported tools — again, you'll find their names in the documentation — and we also give it the LLM that will power the agent. Then we say initialize_agent with the tools, the LLM, and the agent type, and we can give it our complex question. Let's run this and see what we get. Here we get the whole output, and we can follow the thought process: for example, the model said, "I need to find out the year the film was released and then use the calculator to calculate the power." So the first tool it wants to use is Wikipedia; it queries Wikipedia, then says, "I now know the year the film was released," and the next action is to use the calculator. It uses the calculator, gets the math output, and then says, "I now know the final answer": the film Departed with Leonardo DiCaprio was released in 2006, and this year raised to the 0.43 power is this value. So this is correct, and as you can see, this concept of agents and tools is super powerful and enables a lot of complex questions and workflows with your models. The next important concept in LangChain is memory. With LangChain we can easily add state to chains and agents — the most popular example is of course building a chatbot — and we can do this very easily with the ConversationChain. We import it, then again create a model, and then create our ConversationChain with the model; then we say conversation.predict and give it the first input. Let's run this — and since we set verbose=True, we can look at the whole output. First of all, you can see what the ConversationChain does: it formats the prompt like this — "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details." — then "Current conversation:", the human said this, and the AI responded with this, and we get the response: "Hi there, it's nice to meet you." Now if we run this again with the next input, "Can we talk about AI?", we can again see the whole prompt, including the whole current conversation — so it remembered the previous questions and answers — and the answer is: "Absolutely, what would you like to know about AI?" And this is how easily we can add memory to a chatbot.
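The agent setup and the memory example from this section, sketched together:

```python
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)  # pip install wikipedia
agent = initialize_agent(tools, llm,
                         agent="zero-shot-react-description", verbose=True)
agent.run("In what year was the film Departed with Leonardo DiCaprio "
          "released? What is this year raised to the 0.43 power?")

# Memory: the ConversationChain feeds prior turns back into the prompt.
conversation = ConversationChain(llm=llm, verbose=True)
conversation.predict(input="Hi there!")
conversation.predict(input="Can we talk about AI?")  # context remembered
```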
The next important module is document loaders. With document loaders you can very easily load your own text data from different sources into your app and then feed it to your models. Let's go over the supported document loaders — here is the whole list, and you can see there are quite a bunch of them: for example, a CSV loader, an email loader, Evernote, Facebook chat, then HTML of course, Markdown, Notion, PDFs, PowerPoint — you can also easily load URLs. For each of them it's super simple to set up. For example, click on the PDF loader, and this is how you would use it: you import the loader, set it up, and then say load — or, in this case, load_and_split. In this notebook I use the Notion directory loader, for example: we give it the Notion database and then say loader.load, and now we have the raw docs. But before we can feed this to the model, we need to understand indices. Indices refer to ways of structuring documents so that LLMs can best interact with them, and the indices module in LangChain contains utility functions for working with documents. In order to work with documents, we have to understand three concepts. First, embeddings: an embedding is a numerical representation of your data — for example, of your text. Then there are text splitters, which split long pieces of text into smaller chunks. And then there are vector stores — these can be different vector databases — and with these we can capture the meaning of the data and then, for example, get more accurate search results. Usually, to feed our own data to the model, we need to combine all of these concepts, so let's go over a concrete example, and then it will become clearer. In this example I want to load a text file that I downloaded — a .txt file. The first step is to apply a document loader; there's a dedicated one for this too: we use the TextLoader, which we set up here. The next step is to apply a text splitter — again, there are different ones available; in this case we use the CharacterTextSplitter, set it up, and call split_documents. The next step is to set up embeddings — and again, different ones are available; in this case we use the HuggingFaceEmbeddings, for which we have to install a third-party package — and then we simply create them. And the last thing is to use a vector store. There are different ones supported in LangChain — for example, you could use Elasticsearch, FAISS, Pinecone, or Weaviate. In this small code snippet I use the FAISS vector store: we import it, then call FAISS.from_documents, passing the docs and the embeddings. Then, for example, you can easily perform similarity search: you could ask, "What did the president say about Ketanji Brown Jackson?", and this is the most similar result it finds — the most similar text chunk — and as you can see, we also see the name, so this is working. And this is typically how you load text into your app: first you use a document loader, then a text splitter, then embeddings, and lastly a vector store. To understand this better, there's also a very cool end-to-end example you can check out: the chat-langchain repository — the link will be in the description below. So these are the most important concepts you should know about LangChain. All right, I hope you enjoyed this tutorial — if so, drop me a like, and I hope to see you in the next video. Bye.
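To recap the load → split → embed → store pipeline from this tutorial as one runnable sketch (the file name is assumed to be the classic State of the Union example; any .txt file works):

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

documents = TextLoader("state_of_the_union.txt").load()        # 1. load
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = splitter.split_documents(documents)                     # 2. split
embeddings = HuggingFaceEmbeddings()  # 3. embed (pip install sentence_transformers)
db = FAISS.from_documents(docs, embeddings)                    # 4. store

results = db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?")
print(results[0].page_content)   # most similar chunk
```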
|
⛓️ LangFlow: Build Chatbots without Writing Code - LangChain
|
2023-05-24 00:00:00
|
Prompt Engineering
|
today I want to show you a platform where you can create powerful chatbots without writing a single line of code — just drag-and-drop components that you connect together to create chatbots. If you've been following my channel, I create a lot of content using LangChain, but to make it work you need to write quite a bit of code in Python or JavaScript, so I wanted to look at platforms that let you build applications on large language models without writing any code. I came across two different options. The first one is called Flowise, which is built on LangChain.js — they've built a pretty nice user interface on top of it. The second one is called LangFlow, and it uses a very similar concept: drag-and-drop components that you connect together to create applications. The interfaces of both LangFlow and Flowise look very similar. In this video I'll show you how to install one of them — we'll be focusing specifically on LangFlow; in a subsequent video I might cover Flowise as well. I'll walk you through a step-by-step installation process, and then I'll show you how to create different applications; we'll look at some more complicated examples too. In the background, both Flowise and LangFlow use LangChain — they're just wrappers around it — so if you're not familiar with LangChain or want to understand the basic concepts, I'd recommend watching this video. Let's quickly look at the installation process. I'm on Windows, but the installation is going to be pretty consistent across different platforms. Unfortunately, you still need Python in order to run these UI-based platforms, so first I recommend installing Python as well as conda, and creating a new virtual environment for this specific project. To create the virtual environment we use conda: conda create -n and then a name for the new environment — I already have one called langflow, but you can give it any name you want. Once you create your virtual environment, you need to activate it: conda activate langflow, press Enter, and you can see we're now inside our virtual environment. Next we install the LangFlow package: pip install langflow. I've already installed it, so you'll see it says the requirement is already satisfied, but if you're installing it for the first time this step will take quite some time. Then we just run python -m langflow — or you can simply type langflow; both should work. Let's see what happens: it starts a web application on your localhost, using port 7860. Simply go to your browser and type in that address, and you'll see the application up and running. In the background you can see everything that's happening — it's already throwing some error messages; we'll look at why that is in a little bit. Now, the basic structure: it has components, which are all the components available within LangChain — as I mentioned before, it's using LangChain in the background. You can come here and switch between the light and dark themes; I like the dark theme, so I'm going to keep it for the rest of the video.
This is a very simple drag-and-drop interface: you simply drag different components and connect them together to create applications. So let's look at the different components that are available. There are a few agents — a CSV agent, a JSON agent. In terms of chains, these are the chains available in LangChain — I don't think all of them are there, but for example you can create an LLMChain or a ConversationChain. For loading different types of documents they have a pretty comprehensive list: there's a PyPDFLoader, there's a TextLoader, and if you want to interact with websites there's a WebBaseLoader as well — we'll look at a couple of examples in a little bit. In terms of embeddings, right now they only seem to support OpenAI's embeddings, but this might change; they should probably include support for other ones. In terms of LLMs, there are ChatOpenAI, HuggingFaceHub, LlamaCpp, and OpenAI — that's a good start. Then there are modules for memory, if you want to add memory to your chatbots. In terms of prompts, they have prompt templates — that's nice — plus zero-shot and few-shot prompt templates, which are useful. And there's a character text splitter, which is important for splitting your documents, especially if they exceed the context window of your large language model. So they have a pretty nice set of tools — enough to start experimenting with large language models, which is great for creating quick prototypes. Let's look at how we would do that. Say I want to create a chatbot: I'll use the ConversationChain — let me make it a little bigger so it's visible. To connect this conversation chain, we need a large language model; to keep things very simple, we'll start with ChatOpenAI, the basic chat model from OpenAI. Now, if you look at the ConversationChain, it expects two inputs — the memory and the LLM, the large language model — and its output is a conversation chain; the ChatOpenAI module has just one output. You can simply connect these together — for some reason the connection isn't very visible, but there is a connection; let me switch to light mode so you can see — we simply connected these two points, and now they're connected together. To experiment with this, they've provided a simple chat interface: whatever workflow you create, if it works, you'll be able to use it as a chat here. And the good thing is they also have a code section, which shows you, once you create a workflow, how to use it in your own Python code if you want to integrate it as part of an application. So here's how it works with the API: right now our flow is called "New flow", and it will create a JSON file for it. If you want to use the Python version of LangFlow directly, you simply load your JSON file and then you can ask it questions — we'll look at this in more detail in a little bit. First, let's look here: for this ChatOpenAI component we have four different model options, including gpt-3.5-turbo — that's ChatGPT — gpt-4, and gpt-4-32k.
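As far as I can tell from LangFlow's docs at the time, the Python usage it points to looks like the sketch below — the exported file name is whatever you chose ("chatbot_basic.json" here is assumed):

```python
# pip install langflow      (inside its own conda/virtual environment)
# python -m langflow        (or simply: langflow) -> UI on localhost:7860
from langflow import load_flow_from_json

# Load an exported flow and call it like a function, mirroring the
# snippet LangFlow's "code" panel shows for a saved workflow.
flow = load_flow_from_json("chatbot_basic.json")
print(flow("What is the capital of USA?"))
```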
Now, I only have access to the ChatGPT version, so I'm going to use that. Then you can set the temperature; let's keep it at zero. You also need to provide the OpenAI API key, which you can get from your OpenAI account, so I went ahead, copied it, and pasted it here. I think we should be all set now: notice this turned into a green tick, which means we are ready to use this workflow.

To use it, you simply go here and ask a question. I'm going to say "What is the capital of USA?", and this should get a response from ChatGPT. While it's processing, you can actually watch the command line, where you can see everything that is happening. It came back with a response saying that the capital of the US is Washington, D.C., which is correct, and then it provided some more details, because it's using ChatGPT. In the back end it's using a prompt template, and the prompt template is: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context." It then simply fills in the prompt we give it and the AI's response, and once it has executed the chain, it prints "Finished chain". If you're familiar with LangChain prompts or prompt templates, this will look very familiar.

So this is how you create a really basic chatbot, but let's add some memory to it, because we want it to remember previous conversations. We'll go to Memories and select the conversation buffer memory. This has a history key, so what's going to happen is that it will provide the prompt, as well as the history of the conversation, back to ChatGPT. We're going to connect this; let me just switch modes again to check, and yes, it's connected. Now this specific workflow should have memory included; just make sure you wait for it to turn green. It's now ready to use, so I'm going to go back to my chat interface, remove the previous chat, and ask the same question again: "What is the capital of USA?" Let's see what it comes up with. Again, it says the capital of the USA is Washington, D.C. Great. Now let's ask "What is its population?"; it should remember that the context we are talking about is Washington, D.C. This is pretty neat.

You're able to create this workflow, or flow, but how do you use it in your own applications? If you go back to their GitHub page, they have provided examples of how you can deploy this on Google Cloud and also on Jina AI Cloud. I'm not familiar with the latter, but you can do that with them; essentially all you need is the JSON file of your workflow. Let me show you how you create that JSON file: simply go to Export, give it a name (let's say I'm going to call it "chatbot basic"), and optionally a description. If you want to store the API keys as part of your flow, you can do that, but I'm not going to; then simply click "Download Flow". Here is the example JSON document it created, and you can see that in the back end it's actually using LangChain for everything; if you look in here, this is what it's calling behind the scenes. So once you create a flow, you can integrate it as part of your Python code or your application. For example, to use the specific flow we just created, you would use the code segment LangFlow shows, something like the sketch below.
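This is a minimal sketch based on the code section LangFlow displays for a flow; the filename is an assumption matching the export name chosen above:

```python
# A minimal sketch of loading an exported flow from Python (pip install langflow).
# "chatbot_basic.json" is an assumed filename for the flow we just downloaded.
from langflow import load_flow_from_json

flow = load_flow_from_json("chatbot_basic.json")  # rebuilds the underlying LangChain objects

# The loaded flow can be called like a chain.
print(flow("What is the capital of USA?"))
```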
Now let's say you created a flow, saved it, and want to load it back to modify it. For that, click Import, and you have two options: you can look at a few examples, or you can load a local file. We'll click on local files and simply select the JSON file we already created; that opens the same workflow in another tab. So that's how you create, export, and import workflows. It's a great tool for quick experimentation and prototyping.

Now let's look at some of the more complicated examples they ship. We're not going to run these, just look at how the flows are created. Go to the examples, and you can see there are different options available. For example, here is a getting-started basic chatbot with prompt and history, which is what we created. Then you have a vector store and a PDF loader.

Let's first look at the vector store, and at the different components being used as part of this vector-store flow. First you have the web-based loader; it's loading web pages, in this case the FAQ page from a specific website they use in this example. Next they have a character splitter, which simply divides the document into chunks; here they are using a chunk size of 1,000 with an overlap of 200. After that is a Chroma vector DB, or vector store: it takes these chunks, together with the OpenAI embedding model, and creates embeddings for each of these documents. Then there is another component with the information related to the vector store. If you have been following my videos, this whole workflow is very similar to what I have been showing; I actually have an example video on how to create a custom chatbot for your website, so if you want a deeper understanding of all the components involved, watch that video. Back to the workflow: they connect the vector DB as part of a chain, and in this case they connect that to a large language model, the text-davinci-003 model. The great thing about this tool is that it lets you set different options, so you can choose a different LLM if you want to. To run a flow like this, you will have to provide your OpenAI key both for the embeddings model and for the LLM.

Let's also look at one more example, the PDF loader; talking to PDFs is kind of a hot topic right now. The rest of the workflow is very similar to what we saw for the vector store; the only change is that now they're using a PDF loader, so you set it up, select a PDF document, and it goes through the rest of the flow. I'll be creating a lot more detailed videos on how to build and run applications like these. For reference, that vector-store flow corresponds roughly to the plain LangChain sketch below.
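This is a rough sketch under a few assumptions: the classic (mid-2023) LangChain API, a hypothetical FAQ URL and question, and RetrievalQA standing in for the chain component used in the flow. The chunk settings mirror the example flow:

```python
# A rough sketch of the vector-store example flow in plain LangChain code.
# Assumes: pip install langchain openai chromadb, and OPENAI_API_KEY set.
from langchain.document_loaders import WebBaseLoader  # swap in PyPDFLoader for the PDF flow
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

docs = WebBaseLoader("https://example.com/faq").load()  # hypothetical FAQ page

# Same chunking as the example flow: size 1000, overlap 200.
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# Embed each chunk with OpenAI's embedding model and store it in Chroma.
db = Chroma.from_documents(chunks, OpenAIEmbeddings())

# Wire the vector store into a QA chain backed by text-davinci-003.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(model_name="text-davinci-003", temperature=0),
    retriever=db.as_retriever(),
)
print(qa.run("What does the FAQ say about pricing?"))  # hypothetical question
```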
I did run into a few issues here; I think there are still some bugs in the code, so I wasn't able to run the PDF loader, for example, but it might be on my end; I may have some issues in my installation. Overall, it's a great concept, and if executed well, it's a great tool that lets you do very quick prototyping. Play around with it and see if it can be useful in your own workflows. I will also be making videos on Flowise and will compare both of these tools. If you have any questions or comments, please put them in the comment section below. If you like the video, consider liking and subscribing to the channel. Thanks for watching, and see you in the next one.
|
Using ChatGPT with YOUR OWN Data. This is magical. (LangChain OpenAI API)
|
2023-06-19 00:00:00
|
TechLead
| "all right this is pretty cool so I figured out a neat trick to allow me to feed the personal custom(...TRUNCATED)
|