ChatGPT For The Dark Web
By John Hammond
Summary
## Key takeaways
- **ChatGPT Can't Search the Dark Web**: Can ChatGPT search the dark web? The short answer is no, it can't. [00:00], [00:06]
- **LLMs Thrive on Dark Web Context**: These large language models can do really awesome stuff when you give them enough context, when you give them the right data. So, what's to stop us from giving our AI robot LLM machine access to some dark web data? [01:32], [01:55]
- **Flare Tracks Dark Web Exposures**: Flare is a threat exposure management platform tracking data on the dark web, like your exposed attack surface, whether it's credentials, secrets, cookies from infostealer malware, ransomware leak sites, or data being bought and sold in marketplaces and cybercrime forums. [02:18], [02:50]
- **Vibe Coding Builds Rapid Prototypes**: For a security researcher, it's still something that's fun to get a rapid prototype up and running. We're going to vibe code, or AI-assisted program, develop, make, and build our own ChatGPT for the dark web. [00:40], [02:12]
- **AI Agent Interviews Refine Ideas**: I need you to ask me questions. I need you to refine your understanding to get the best context and try to make sense of what it is that I'm thinking. I literally want you to interview me with multiple-choice answers. [05:28], [06:14]
- **MVP Queries Dark Web Chatter**: What do threat actors think about LockBit in 2026? The evidence here suggests Lumma Stealer is still an active criminal topic in 2026, with chatter centered on operations, access and sales panels, and stolen logs. [16:18], [19:11]
Topics Covered
- Flare Tracks Exposed Secrets
- Interview AI to Refine Ideas
- AI Thrives on Local Context
- Test-Driven AI Builds Validate
- Dark Web GPT Queries Live
Full Transcript
Can ChatGPT search the dark web? Well, we can answer that pretty quick. Short answer is no, it can't. But what if it could? What if we could make it search the dark web? What if we could make our own ChatGPT for the dark web?

Now, it's no secret I've gotten a little AI-pilled. I'm not afraid to say it. I'm not ashamed. I think it's pretty cool. I've been doing some of that old vibe coding, although I don't like to call it that, right? It hurts my fragile feelings. And I know we would never really put this in production. At least I hope, right? Something official should really be tested and maintained. We probably want some actual software engineers and architects doing that stuff, but I am honestly neither of those things. For a security researcher, it's still something that's fun to get a rapid prototype up and running.
And if you aren't familiar, I've been doing this lately on recent live streams, where I am trying to lean in to that loosey-goosey, wibbly-wobbly, non-deterministic AI world. And I thought it would be kind of cool to talk a little bit about some of that and showcase maybe some of the process in a video, because I also want to learn from you. How are you using AI, with all the different harnesses and tech and MCP skills and stuff that are out there? How do people use AI?

I know for my own AI adventure, way back in the early days, right, 2023 or whatever, I was copying and pasting code out from ChatGPT. Then when Cursor hit the streets, I wanted to dive into that. I've been into OpenCode. I was in Claude Code, right, when Opus 4.5 was out and about. Now Codex and GPT 5.4 is pretty awesome. And honestly, I'm just back in Cursor. I like being able to, like, see the files in the code, you know. The coolest thing, though, is that these large language models can do really awesome stuff when you give them enough context, when you give them the right data. So, what's to stop us from giving our AI robot LLM machine access to some dark web data?
Anyway, that's enough background context. That's what we're doing in this video. We're going to vibe code (or AI-assisted program), develop, make, and build our own ChatGPT for the dark web.

Now, for our dark web data set, I am going to leverage Flare, and more specifically the Flare API. Now, Flare is the sponsor of this video, and I'm always grateful for their support, but seriously, they have an incredible data set. Look, I know I'm a fanboy, but Flare is a threat exposure management platform and solution. So they're tracking data on the dark web, like your exposed attack surface, whether it's credentials, secrets, cookies, things that might have been stolen from infostealer malware or ransomware leak sites, or data that's being bought and sold in marketplaces or cybercrime forums. And it's not just tracking the dark web. It's also the clear net: maybe leaky Amazon AWS S3 buckets, or Google, or Pastebin, or GitHub repositories. They have so much insight that it's going to be pretty awesome to loop in their API. So, all that is to say, leveraging their API is going to be really cool to get some dark web insight, and giving that to an LLM, letting ChatGPT or whatever run with it, is going to be wild, really cool security research. But let's get to the implementation a little bit later.
First, let's set up our environment. So, I'm inside of a virtual machine, because if I'm doing anything with AI, I like to be in a virtual machine, especially if it's in YOLO mode, like dangerously-run permissions and all. And I've got Cursor and everything up and running. And I'm actually going to just create a new project for us inside of a git directory. That's where I like to put these things. And let's make a project, dark-web-gpt. Now, back in Cursor, I can go ahead and open that project.

And I'm going to be honest, I'm really liking the whole voice input, or speech-to-text dictation. Maybe I'm a boomer, right? Maybe I'm talking to my phone, trying to, like, transcribe what I'm saying to my text messages, but I think it's kind of nice just to be able to ramble and rant, and then hopefully have the robot LLM try to make sense of what we're up to. And I want to talk about that a little bit more, right? Let me maximize the chat here. I add all this preface because I genuinely want to know your opinion and how you approach a lot of this. So, let me know in the comments.
But I acknowledge, when folks try to one-shot something, or if they are, uh, hoping, okay, in their vibe coding, just throwing stuff left and right: when you prompt AI, even if you hand-jam type it, you're trying to get across the idea that's in your head, and a lot of the time you either can't articulate it well, or you can't get it to the point that the robot AI can really read your mind. Same way we do when we have conversations. We're just trying to articulate what we're thinking, and however the person on the other end of the conversation parses and perceives what you're saying, how they interpret it, is what they're going to go running around and do.

So, I really like to set up some little scaffolding so that it will be able to ask me further questions to refine its understanding. Maybe this is goofy, but I do think, so that I can just word-vomit and dump a whole lot of info, it can try to make sense of what I'm saying and find the real signal in the noise while I'm struggling to articulate it even myself, because I'm usually pretty verbose and long-winded, right? I can yap. You know, you've seen the videos. So, let me do this once just to get the idea across, and then I'll speedrun and cut up whenever we're doing this for the rest of the video. Hey,
Cursor. We are working inside of a new project, and if you could, I'd really like to get some rules set up for our environment, so that you basically have, like, an AGENTS.md markdown file and you're able to initialize what we want to do for our project. But ultimately, I want you to be able to get a better understanding of what I'm trying to say. So, whenever we're working on a new task or effort or development direction, I want you to realistically plan this all out, ensure that you think a lot about it, and work through all the different edge cases. But ultimately, I need you to ask me questions. I need you to refine your understanding to get the best context and try to make sense of what it is that I'm thinking and what I'm trying to articulate. I literally want you to interview me. So, however you wrap this as a skill or a rule inside of our environment, make sure that you literally, genuinely provide these terse multiple-choice answers, using the ask-question tool, for me to be able to refine what I'm saying, or add a free-form response for me to then give you more context as to what I'm thinking. Let's establish this project as something that we can get working in, and then let's, uh, start to work.

I'm going to send this and let him cook. But honestly, again, I want to know your thoughts, because I'm sure there are going to be folks that say, "Hey, you know, you're burning a lot of tokens. You're cramping up your context window."
And I agree, right, to a certain extent, but I do think that does help me just blah, and it tries to find what's actually important and what's tactical that it can implement for code. So, a couple questions here, right? It's already doing what I'm asking for. You can go ahead and build out both AGENTS.md and Cursor rules. Always ask me questions, and you can do it just about all the time. I want us to be able to work together on this thing. And multiple choice, just so we can make faster turns. I want to be able to set the robot in motion, let it do its thing, but if it needs any clarifying questions, then it's able to do that in a quick and easy way. And then I can add that input and let it go.

All right, look at him building out some good AGENTS.md markdown files and even some Cursor environment-specific rules. Granted, the best part is, hey, we can open up the explorer and then take a closer look to dig into these, but ooh, what is this project primarily meant to become? Yeah, we can get into more of that, but ultimately I do want it to be both research experiments and kind of a tool that's working as, like, a proof of concept. I think it is important, though, to design the system, get our framework and architecture going. This helps me structure and shape the codebase to really be what I am steering it to be.
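For the record, the rule file it spit out looked roughly like this. This is my paraphrase of the idea, not the exact AGENTS.md Cursor generated:

```markdown
# AGENTS.md (excerpt)

## Interview-first workflow
- Before starting any new task, switch to plan mode and interview me.
- Ask terse multiple-choice questions via the ask-question tool, plus a
  free-form option so I can add extra context.
- Keep asking until the requirements are unambiguous, then propose a plan.

## Design principles
- Modular, extensible Go packages; no duplicate logic, no hard-coded values.
- Short, terse files that are easy for an LLM to read end to end.
- Every feature ships with end-to-end tests; validate before presenting work.
```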
And of course, if we wanted to review a couple of these, we could go take a look. And this has even some simple, good structure for us to be able to keep working with that flow. All right, so now let's get a better idea for how we might be able to actually implement this
kind of idea for a dark web ChatGPT equivalent. I know I mentioned I'm using the Flare API, so we have a couple things we've got to keep in mind. And honestly, we could just kind of give a lot of the documentation to the AI. If it can use this as a reference, well, then it can track down the fact that, well, we need an API key. I can grab one of those from the portal, and we could use a language like either Python or Golang, even a command-line interface that they have ready, or a whole MCP, Model Context Protocol. Shoot. Okay, that basically solves the problem. Now, I know it's a can of worms, everybody's screaming MCP versus CLI, but why don't we build our own, or at least do some exercises in vibe coding, AI-assisted programming, and let's do this in Golang.

They've got even some specific guides or examples on how you could, what, search for any sort of event, track down credentials like usernames or passwords, or maybe even cookies, right, that have been leaked for any organization. Thankfully, they even have a whole API reference. So, this could be awesome to honestly just give to the robot, let our LLM rip through it, and it could make sense of and work with the whole interface here. They have some really cool tooling over on their GitHub, including a whole Golang SDK to be able to work with the API. And that would also be a sweet reference. So we could incorporate that and use even the code or specific examples that Cursor, Codex, GPT 5.4, whatever, can rip through. That's my whole idea here and approach, right: hey, can I give it enough documentation, enough understanding, enough example syntax, that it might be able to make sense of how it could put together and build with the parts and pieces that I know I want.
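If you're curious what the plumbing might look like, here's a rough Go sketch of building that kind of search request. The endpoint path and JSON fields are my own guesses for illustration, not Flare's documented API, so check their API reference for the real shape:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildSearchRequest sketches the kind of call the agent ends up making.
// NOTE: the URL path and JSON body here are illustrative assumptions,
// not Flare's documented API; consult their API reference for the real shape.
func buildSearchRequest(apiKey, query string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{"query": query})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		"https://api.flare.io/search", // hypothetical endpoint
		bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	// Auth scheme is also an assumption; Flare's docs specify the real one.
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildSearchRequest("FLARE_API_KEY", "lockbit")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Host)
}
```

The point is just that once the docs and SDK are sitting in a local reference folder, the robot can assemble this kind of client code without guessing.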
And I've opened up OpenTUI, the open terminal user interface, in the spirit of a lot of the recent, you know, command-line utilities to be able to interact with, like, Claude Code or Codex or OpenCode. We could use this as our interface. And if we wanted to have some of that AI reasoning be able to work with this in the LLM, after it's retrieved information from the Flare API, well, then why not use the Codex app server? So we sort of have, like, a harness harness. Check it out, right? Here's the Codex app server, that sort of kind of acts like an API, essentially, for us to be able to interact with our AI and have that LLM reasoning and GPT in the mix in a programmatic way. You see the vision? You got the idea? All the kind of gears turning, and how we could architect, or at least steer and structure, what we'd like for implementation, and now put the puzzle pieces together so that the robot, sure, vibe code, AIs, whatever, can crank it out.
So, maybe I'm crazy, but I do really like to actually make sort of, like, a reference folder and then download a lot of the actual libraries or packages that I know I want our AI robot to be able to reference, so that it can actually look those up locally rather than making a whole ton of web requests to go get some of the insight and syntax examples that it might need. Same goes for documentation, same goes for packages, same goes for stuff that I just think would be handy for it to know. And if we couldn't get, like, OpenTUI working (maybe, I don't know if the Golang bindings still exist), we could use Charm, or any of the Charmbracelet suite stuff that is out and about.

Back inside of Cursor, I'm probably going to go through another long, verbose, like, brain dump of all those different ideas, dependencies, libraries, tooling, parts and pieces that I know should be included. And then we'll go through that planning phase. It'll ask me questions, and then we'll hopefully get a closer semblance of what I really want. I like to add in some, oh, kind of design philosophy or programming principles, to say, look, this should be, like, a modular, extensible, adaptable, versatile, flexible architecture and framework, organized into packages with smart object-oriented blah blah blah, no duplicate code, no repeat logic, no hard-coded values, stuff that we could make customizable and configurable and extensible and all that, as sort of just my ideals for how I would like our program to be maintainable. And even, like, short, terse, brief files, so that it's able to easily roll through any bit of code, and have that be LLM-friendly, so the AI can kind of turn and burn.

And most importantly, I want to make sure that it can test itself. I want end-to-end testing, cradle to grave, start to finish. Literally, try to see your product as it grows and test and validate that it works. I don't want to, like, beat a dead horse of test-driven development, but letting our robot validate that it's working and doing what it should really saves the day, because then it just presents to you something that is bundled up the best that it can be. So, I'm going to ramble and yap, and I'll, like, speed up or cut up this segment of the video so you don't have to listen to me drone on. But then we'll see if we craft the idea well enough so that we've got a sweet vibe-coded, whatever, AI-assisted-programming, whatever you want to call it, MVP, something that we could see in action.
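To give an idea of what "validate that it works" means here, a toy smoke check might look like this in Go. The heuristics are illustrative, not taken from the generated codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// smokeCheck is the sort of cheap end-to-end assertion I want the agent to
// run on its own output: did an answer actually come back, and does it
// relate to the question at all? Purely illustrative heuristics.
func smokeCheck(question, answer string) error {
	if strings.TrimSpace(answer) == "" {
		return fmt.Errorf("empty answer for %q", question)
	}
	for _, term := range strings.Fields(strings.ToLower(question)) {
		// only bother matching "meaty" query terms of 5+ characters
		if len(term) >= 5 && strings.Contains(strings.ToLower(answer), term) {
			return nil
		}
	}
	return fmt.Errorf("answer never mentions the question's key terms")
}

func main() {
	err := smokeCheck("what do threat actors say about lockbit",
		"Chatter about LockBit remains active on several forums.")
	fmt.Println(err == nil)
}
```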
All right, I, uh, crapped out a whole lot for this thing. Let's see what it does. Let's see what he makes.

Good, switching to plan mode. That's exactly what we want. This way it can interview me and refine and clarify, if there's anything that it wants a little bit more of a sharper understanding of. I'm going to just let this go.

Good. We have some interview questions on, like, the operating boundary. Uh, we can let this thing be broad. What bootstrap behavior do we want first? Since this is for development, I feel like we can do local secret persistence, right? Essentially, like, a .env file, and auto-start and manage the local Codex app server. Absolutely. That way we could have this maintain and be able to rerun, and we basically have kind of a singleton Golang binary solution, and it will be able to get the AI magic. It'll be AI all throughout, by managing its own singleton for, like, the Codex app server.

All right, he's cooking up a plan. Let's take a look at what it's thinking here. Build a single Go binary that feels like a chat-first analyst workstation. It prompts securely for the Flare API key when missing.
Ensures a local Codex-compatible runtime is reachable, or starts it when possible. Lets the user ask natural language in a TUI, and runs a single search-dark-web tool against the Flare API. Nice. It even got some notes here for what it wants. It has a proposed architecture to build all this out. It will use some of the Charm Bubble Tea, Bubbles, and Lip Gloss utilities for the terminal user interface. I don't think, uh, OpenTUI has Golang bindings like I thought I saw online, but that's okay. Codex runtime management, good. Flare integration, good. Agent loop, TUI, validation, edge cases to cover. Cool. Yeah, you know what, I think so. Make it so. Let's, uh, watch these files come to life here.

So, I gave the thing just a little bit more context. I did want to let it know, like, hey, don't forget you can research, you can look this up on the internet, if I didn't give you enough of the documentation for any of the packages and stuff that we want in there. And it is cooking through, uh, a lot of good stuff. I think it has a working, at least it said compilable, you
know, it compiles an .exe, but it's doing a little bit more polish. And we'll see how far along it comes when we can finally test this thing. Oh, it's done. Okay. Implemented the MVP and built the Windows executable at dark-web-gpt.exe. The app now has a modular Go structure with all of the little internal syntax scaffolding. It supports masked Flare API key entry and local persistence. Flare search. One important note. Oh, the MVP runtime adapter uses the installed local Codex CLI JSON event stream for readiness, planning, and answer generation. It does not yet speak the full WebSocket Codex app server protocol end to end. That's... I just wanted it to work. The app is still local-Codex-backed and runnable, but the specific transport layer remains the main future upgrade point. That's fine.
We could keep spinning our wheels on this, but I did want to see if we could just get something working. So, could I now totally go back into my terminal and take a look inside of the output folder? And we've got our executable. And, moment of truth, right, crossing my fingers here. Will this thing run? What do we got?

Flare API key is required before searches can run. Paste the key below. Input is masked and saved locally. Is it really? Okay, that's cute. Let me go snag the API key from the platform. That's under profile. Key is copied. Let me paste it in. Validating Flare access and local Codex availability. Validating startup. Oh, what are you doing now? Oh, you know, we should have gave it some logs so that it would have, like, a little bit more observability. Whoa. Dark Web GPT status is ready. It's ready to help. Ask about dark web activity, leaks, or threat actors.

What do threat actors think about LockBit in 2026? Planning the next step up here. What do you want the planning stage to produce for Dark Web GPT? A product plan, system architecture... Uh, that's not quite what I was hoping for. Have you searched the database to give me some insight? Um, can I run it from the Cursor terminal, and then it could, like, see itself and actually try to run and get it working? Let's talk to the robot
again. All right, so we have a couple changes that we can make. Um, number one, we could add a heck of a lot more color. I want to make this app a little bit more beautiful. And I noticed our current turn seems to be repeatedly asking me, what do you want from this? Like, a repo or something, one way or another. And it doesn't seem like it's actually using the search capability or asking anything from the Flare API and database.

I think the struggle with the command-line shelling out is really biting us. So, I do want to kind of more strongly enforce the fact: look, the Codex app server is truthfully really what I want. I think we need to refactor and revise this, and hopefully that won't be as brittle as those command-line mistakes that we keep running into. So, fingers crossed, this one will give us a little bit more love, and hopefully it'll do a little bit better validation, to get us a better working solution when the time comes. But it's still fun to see this coming together, slowly but surely. More than I could have done in however long we've been hanging out together so far.
Okay, now it thinks it's got something a little bit more reliable for the persistent Codex app server client. So, fingers crossed, this one will work well for us. Let me try this once again. Say, what are threat actors saying about Lumma Stealer in 2026? We'll be able to request through Flare. We are searching Flare, and hopefully we'll have our AI LLM reasoning, and it will be able to spit back an answer to us. If even that comes together, I think we've got an MVP, a potential proof of concept here.

Ooh, okay. Okay. Okay. I can't particularly scroll right now, but look at this. The short answer: the evidence here suggests Lumma Stealer is still an active criminal topic in 2026, with chatter apparently centered on operations, access and sales panels, and stolen logs. We should have, like, markdown formatting, though. We could have, like, beautified this. We could add more color if we wanted to render this a little bit better. But this is cool.

Are there Telegram channels pertinent to Lumma Stealer? Okay. Yeah, we're totally ruining, uh, kind of our back-and-forth here. We needed to be able to have, like, a good scroll for this. But the proof of concept, like, functionally works. And again, if we didn't do this in a terminal user interface, it would have been able to have a much more gorgeous and beautiful display. But for funsies, we could refine this. We could make this
better. Okay, I'm tightening it up now. And it's even got mouse wheel scrolling now. So, I think we'll be good. I think this should work. Let's try this. Dark Web GPT. We connect it in. Ooh. Are you with me? Yes. What can you do? I can help with code. Oh, look at that streaming. What can you tell me about dark web leak sites? Okay. Okay. Can I, like, scroll with my mouse? Oh, I can. That's so sweet.

What are the Telegram channels associated with Lumma Stealer? Thinking, running search-dark-web, right up at the top here. Look at that. Oh, he's got something. He's got something. First pass was noisy. I'm narrowing down to named Telegram channels and actor-controlled handles tied specifically to Lumma sales and support announcements. Whoa. Search-dark-web completed, right up at the top. Look at this. Cross-checking with public reporting, because the chat feed is returning sparse labels without enough context. All right. Maybe I gave it a little too much. Wait, wait, wait, wait.
Oh, that's so cool. Yeah. No, it got him. Look at that. What do threat actors say about LockBit in 2026? Look, we just spoke this into existence, right? Like, I talked and rambled to my camera and my microphone, and now we made this cool thing. It's still searching. It's, like, going iteratively back and forth to run the search over and over again, and query and look inside of, uh, Flare. But look, it's got, like, TLP research. What forums are there the most chatter on? Most referenced in the current index results: XSS, Exploit, RAMP, BreachForums. Are the onion links in the Flare data set search results? Can you point me to specific records or entries? That'd just be cool. That'd just be neat, you know. Oh, neat.
Hey, thanks so much for sticking with me on this one. I know it was a little bit weird, a little bit different, a little bit more exploratory. Literally doing, oh, vibe code and AI-assisted programming for a video, which I didn't know how well this thing was going to go. I think we went through a couple different turns to try to refine, get closer to the idea that I really wanted, but the final product is kind of neat, not going to lie. A terminal command-line interface using Codex, still local, right, for your own machine, and really genuinely looking at that dark web data. Big thanks to Flare, of course, sponsor of this video. I'm always grateful for their support. Give them some love, link down below in the video description. But, ChatGPT for the dark web. Thanks so much for watching, everybody. Please do all those YouTube algorithm things: like, comment, subscribe, give some love to Flare, link in the video description, and I'll see you in the next video.