Transcript: How to Build 24/7 Claude Agents. Easy.
0:00
Claude Code has finally brought us routines, which means you can give Claude Code a prompt that runs on the web, so your laptop does not have to stay open. And I'm so excited about it. I've already been playing around with it and migrating my automations over. But there are a lot of little gotchas, so I'm here to explain exactly how to set up these automations so that they actually work. Today, April 14th, Claude tweeted: "Now in research preview: routines in Claude Code. You configure a routine once" (it's basically a prompt) "and it can run on a schedule, from an API call, or in response to an event," and it runs on Anthropic's web infrastructure. So,
0:31
that's awesome. You can call a routine from an API, have GitHub events trigger it, or put it on a schedule, like the scheduled automations we already have, but now running on the web. So, you really can create these from anywhere. You can do it right here in the terminal as a scheduled trigger to run scheduled remote agents. You can also go to claude.ai/code and do it on the web; right here, you see I have three web-based routines. Or, what I'm going to show you today, you can do it in the desktop app, because if I go to my scheduled tasks, you can see I've got four that are local, and then I've got four that
1:02
are running inside of a GitHub repository. So, these are the remote ones. If I go up here and click on a new task, this is where we can set up a new local task or a new remote task; it's very similar. You set the name and what Claude should do, and that's the actual prompt. I'll talk more about that in a sec. Then you configure your model, your repository, and your cloud environment. You set the cadence: hourly, daily, weekdays. I think the minimum is once an hour, so you couldn't go every 10 minutes or anything, but still, not bad at all. This is also where you configure all of your connectors. So, if you need to
1:33
connect Slack or Gmail or whatever it is, you can connect them right here. But you can also just use your regular API endpoints with your API keys. And then, of course, you've got your permissions, so you can choose how Claude should act. Now, one thing about these: they're meant to be one-shot prompts. You're not around, so you want to make sure Claude never has to stop and ask you questions; otherwise, what's the point of the automation? Like I said, there are tons of things to dive into here, and I'm not going to bore you. But some of this is really important, because when I first got this set up, my automations weren't just
2:03
migrating over and working. So, I'm going to tell you the issues I ran into, and hopefully answer everything you need to know so you won't have to go into the comments and ask the common questions. First, let me quickly show you what I tested. The first thing I wanted to test was creating a new routine that just shoots a message to my ClickUp. Obviously that has no real value, but I wanted to see how it worked, and specifically whether I could do it without adding the ClickUp connector. I was able to get this to fire off, but it
2:34
didn't work right away. So, let me show you what I ran into. The way this works is you need a GitHub repository to sync to in order for this to actually run. It's going to clone my Herc2 project in the web, read my claude.md, and read my scripts and my skills. Then, after it finishes the job, it basically destroys that little cloud GitHub clone. But as you know, you don't push your secrets to GitHub. If you look at my Herc2 project, this is my .env file with all of my API keys, and it's listed in the .gitignore, which basically says, "Hey,
3:05
when you push to GitHub, you don't include these files." What that means is, if the routine is only looking at your GitHub repo, there's no .env. So, how do you get your API keys into a routine that runs on the web? Inside the scheduled task, you have a cloud environment. If I click on this one, you can see it's called Nate Herc Cloud. If I open up the settings, what do you see? You have the name of the cloud environment, the network access, and the environment variables. Right here is where I put in my YouTube API key, my ClickUp API
3:37
key, and any other API keys I need to give this cloud environment access to. The other thing you have to do is look at the access levels, because you can see this one is on full, but by default it will be on trusted, I believe. That means you can only download packages from sources Anthropic has verified. When we talk about this later, I'll have a link where you can see all of them. You could even choose custom if you wanted to allow specific domains that aren't on that list. But in order for ClickUp to work in this case, I had to go full, because on trusted it said, "Hey, we can't actually do that." But,
4:07
when I changed this to full, it let me send a message to my ClickUp. And that's how I got this message right here that says, "Just testing that the remote tasks work and the credentials work." Basically, when these run, whatever you have in the instructions is what gets prompted, exactly the same way local scheduled tasks work. Right here, you can see I say, "Send a message in the internal ClickUp channel," and the actual thing it sent was, "Send a message in the internal ClickUp channel." So, think of a scheduled task or a routine as you typing in a prompt, and then someone coming in to
4:38
your laptop and typing it in for you. It's the exact same type of interaction as you talking to Claude Code. That's why, once again, you want to make sure it's specific enough that Claude can one-shot it. Okay, let's dig a little deeper. Next, I set up another routine that uses the YouTube Data API to grab some YouTube comments and give me a little analysis in ClickUp or wherever. This is the prompt I used: "Analyze 50 of my most recent comments from YouTube and give me a quick bullet rundown. My YouTube API key is available as an environment variable.
5:09
Use it directly from the environment. Don't look for a .env." Here's why: in this Herc2 repo, when I normally run this, Claude grabs all my API keys from the .env, and it probably reads the claude.md and learns that's where a lot of them live. So, by default, it may try to look in the .env and not be smart enough to fall back. For ClickUp, it was fine; it figured it out. But for some reason, the YouTube one didn't, so I had to explicitly tell it, "Hey, look in the environment variable rather than in the .env."
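If your own scripts need those keys too, the same rule applies: read the environment first and fail loudly, rather than hunting for a .env that won't exist in the clone. Here's a minimal sketch; the variable name YOUTUBE_API_KEY and the demo value are just illustrations:

```python
import os

def get_api_key(name: str) -> str:
    """Read a secret from the environment; in a routine there is no .env to fall back to."""
    value = os.environ.get(name)
    if value is None:
        # Fail with a clear message so a one-shot run logs *why* it stopped.
        raise RuntimeError(f"{name} is not set; add it to the cloud environment's variables.")
    return value

# Demo only: simulate the cloud environment having set the variable.
os.environ["YOUTUBE_API_KEY"] = "demo-key-123"
print(get_api_key("YOUTUBE_API_KEY"))  # → demo-key-123
```

Locally, the same function works as long as your .env gets loaded into the process environment first, so one script can serve both places.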
5:40
The first time I ran it, at 12:41, I didn't say that, and it couldn't do it. It said something like, "Hey, I can't find that; I'm getting an error." I even tried to tell it, and it still didn't work. But on the most recent run, after I updated the prompt a little, it fetched everything right away using the API key, and now I have a remote routine that works. Obviously, I need to update this; I'm going to migrate over my other automations, but this was just for testing purposes. Another automation I have opens up a browser using the Playwright CLI,
6:12
and it does some stuff in my Skool community, because there's no publicly accessible API; we've figured out a way to automate it through the browser. I'm not going to dive into that right now. But here's what I want to tell you about it: I tried to move this Skool Wins engagement automation over to a remote session. I copied the exact same prompt from my regular scheduled task and just added a little snippet at the end. But it didn't work, because when the routine spins up a browser, there are no cookies,
6:42
because all of this is running remotely, and all Claude has to look at is the GitHub repo; it can't see the local cookies from the last couple of sessions of this automation. So, this wouldn't work, because it has no access to that state. If I wanted to run an automation like this, I'd have to use an endpoint that takes authentication in the form of actual cookies, a header, or an API key, because every single one of these runs is stateless, and after the run, the GitHub clone just gets deleted. Now, the
7:12
exception is if the automation changes something in your code base or does a review. In that case, it will create a new branch for you, or give you some sort of output, rather than just deleting everything it did. But for an automation like this one, it would all just get deleted. Hopefully, after seeing those examples, you now know how to come in and make whatever changes you need to keep your automations running. By that I mean: you understand this should be a very specific prompt, this is how you change the model, you have to have a GitHub repo, and you can change the settings
7:42
for your cloud environments right here. You set the schedule, you add any connectors you might need (honestly, this would be a little easier if you just added something like a Slack connector), and then you set your permissions here. Now, the other thing to be aware of is that you do have limits. If I come over to my settings and go to my usage, you can see our regular session limits and model limits, but under additional features, we have daily included routine runs. I haven't run any on an actual schedule yet; I've just been testing them. But we are at zero of 15. So, I can only have 15
8:12
routine runs per day, because I'm on the Max $200-a-month plan. Your limits are lower on Pro, I think maybe three or five; I'll have that information later on, but it's something to keep in mind. All right, let's dive into a few more details that may answer some questions you have. I think it's pretty clear at this point what this is. I'm going to give you this entire doc, along with everything else I've talked about, in my free Skool community; the link is down in the description. Some of this stuff I might not cover, so if you want to read more about it, just go
8:42
ahead and grab that free resource. So, we know what it is, and I think we know how it works: you define a routine, which is a prompt; you connect a GitHub repo; you can trigger it by API or by a GitHub event; and you can connect your connectors. Basically, it acts as you talking to your own Claude Code. Because it works off of a cloned repo, it's going to read the claude.md file automatically every time. So, if you have a massive project, like my Herc2 project, with tons of context and tons of stuff, maybe you
9:12
don't want to put that whole repo into the cloud for a routine run, because there's a lot of context in that claude.md and in that GitHub repo that might not matter for this automation. Maybe you're better off setting up a specific GitHub repo per scheduled routine, following claude.md best practices and putting in only the information that's important, because this stuff drains your Claude Code session limits the exact same way as if you had Claude Code open and were just talking to it.
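For example, a routine-specific repo can keep its claude.md tiny. Here's a hypothetical sketch of what one might contain; every name in it is illustrative, not something the feature requires:

```markdown
# claude.md — youtube-comment-routine (hypothetical example)

## Purpose
One job only: fetch my most recent YouTube comments and post a bullet rundown to ClickUp.

## Credentials
- Read YOUTUBE_API_KEY and CLICKUP_API_KEY from environment variables.
- Do not look for a .env file; it is not in this repo.

## Order of operations
1. Fetch the 50 most recent comments via the YouTube Data API.
2. Summarize the themes as short bullets.
3. Post the summary to the internal ClickUp channel.
```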
9:42
Once again, there are three trigger types: schedule; API, which I think is really cool, since you could have a different automation make a POST request to a routine; and GitHub, where it fires off like a webhook, based on new PRs, pushes, issues, releases, things like that. So, how does this compare to what already exists? We have routines, the new feature; we have desktop scheduled tasks; and we have something like a /loop command. Routines run on Anthropic's cloud, and the other two run on your machine. Do you need the machine on? Not for
10:13
routines, and that's huge; for desktop scheduled tasks and for /loop, you need your machine on. Do you need a session open? No, and that's the same across all three. Do they survive restarts? The first two do, but /loop does not; it has to live within a specific session. Local file access? No for routines, because they work off the GitHub repo; the other two do have local file access. Permission prompts? Routines are fully autonomous; for the other two, they're configurable. And the minimum interval: routines is 1 hour, while the other two could go every minute if
10:43
you want. Okay, so let's talk about the environments. Obviously, your .env is gitignored unless you push it into the GitHub repo. Ultimately, if you push it into a private repo, you're probably okay, but you want to be really careful, because then it's in the history, and if other people end up collaborating on the repo, you just don't want that. So, put your API keys in the environment variables, like I showed you earlier. You also want to look at the network access, whether that's full, trusted, none, or custom, and potentially a setup script. That's not something I've shown you yet. If you're creating a new remote
11:14
task, you can add a setup script, which is just a script that runs when the new session fires up, before Claude Code launches, in case you need to install any packages or anything like that. Okay, so what's the difference between trusted and full? Trusted only reaches the known, vetted services from Anthropic, which I thought I linked right here, but I just linked it there. It basically shows you all of the domains that are allowed. Right here you can see we've got Anthropic services, version control, and some cloud platforms like Google, stuff like that. These are the ones that are
11:45
kind of already verified. So, what's the risk of going full? Well, if Claude reads malicious content during a run, it could theoretically be tricked into sending data to an external server, and with trusted, that outbound request would get blocked. The practical risk for private repos where you control the inputs is very low, but I definitely wanted to at least acknowledge it. As for connectors: these are different from just adding your API key. They're more like the connectors you'd add to your actual Claude chat or Claude Cowork, where you OAuth into Slack or ClickUp or
12:15
stuff like that. Here are some security details. I'm not going to go super deep into this; you can do more research and download the doc. But there are things to be thinking about, like your API triggers and what's going on with your GitHub repos and branches, because everything is going to be running as you. So, if you're not testing these routines before you just send them off every hour, you have to think about what could happen without permission prompts, things like that. Limits and quotas: it looks like on Pro you get five runs a day, on Max you get 15 runs a day, and on Team and
12:46
Enterprise, you get 25 a day. If you hit the cap, orgs with extra usage enabled can exceed it on metered overage. The minimum scheduled interval is 1 hour. There are also resource limits: each of these routines runs in the cloud on 4 vCPUs, 16 GB of RAM, and 30 GB of disk. So, once again, think about whether you're pushing an absolutely massive GitHub repo up into the cloud that could just be wasting resources for no reason. So, what persists versus what gets destroyed? The Claude branches get
13:17
pushed to your GitHub repo, and the session also stays. As you saw, if I come in here and look at all of these tasks, I can see all of the past runs, and I can go look at them to see if something went wrong. But the actual cloned Claude environment gets destroyed. The rule of thumb: if something is local, and Claude Code can't reach it in your GitHub repo or via an API, it won't work. We already talked a little about writing good prompts, but you definitely want them to be specific. For example, my scheduled automation here is much
13:48
more specific, right? I have a skill I want it to run, and I give it the order of operations. Something more like this YouTube comments one is not what you'd want to put in there, unless you'd defined a skill to just let it run. Once again, this is supposed to be a one-shot prompt, so you want to make sure it gets it right on the first try. Okay, so why is this so exciting, and why does it beat normal automation? Because we're actually keeping the agentic framework. If you know the WAT framework I talk about, where we have workflows and agents and tools: when we push those automations
14:19
to the cloud as just a Python script, we're losing the agentic piece; we're really only shipping the tools and the workflow. But in this case, we're keeping the W, the A, and the T all running together, because the agent is reading the claude.md, looking at its scripts, and figuring out what to do. If it runs into errors mid-run, it will self-correct. And if you configure it the right way, it can leave a memory trail, so even though each run is stateless, you can still have them kind
14:49
of continuously get better. And real quick, let's speed-run through the common questions. Do I need to know cron syntax? No, you can just schedule in natural language, super easy. Can it access my local files? Nope, it only gets what's in your GitHub repo or what your APIs return. What model does it use? You can choose any of the models, as you saw. Can you watch it work in real time? Yes, you can hit run now and watch it go, the same way you would in Claude; you can even talk to it after it's done, or interrupt it and then continue. Can it use my MCP servers? Yes, that is what the connectors are.
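Quick aside on that memory-trail idea from a minute ago: one simple pattern is to have the routine append a one-line summary to a log file in the repo, so that if the branch gets committed back, the next stateless run can read what earlier runs did. A rough sketch; the filename and summary text are made up:

```python
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("runs.log")  # hypothetical file, pushed back on the routine's branch

def record_run(summary: str) -> None:
    """Append a timestamped one-liner; a later run can read this to 'remember' history."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"{stamp} {summary}\n")

record_run("fetched 50 comments, posted rundown to ClickUp")
print(LOG.read_text(encoding="utf-8").splitlines()[-1])
```

You'd also need the routine's prompt to say "read runs.log before starting and append to it when done" for the trail to actually get used.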
15:20
Can teammates use my routines? Nope, these belong to your individual account. You might be able to share them on a team plan, but I haven't actually tested that myself. What's the cost? It's just your normal subscription usage, so keep that in mind. What happens when a run fails? Every run is stored in your history, so you can go see why it failed. You could even have every routine end with, "Hey, if this fails, shoot me a Slack message to let me know," things like that. And can I test a run before going live? Yes; in fact, you should test it multiple times before it goes live. You just go into the routine, you
15:50
hit run now, and it will pop up as running. Then you watch it go through its order of operations, and you can inject and help it correct itself, so you have confidence that the next time the prompt fires off, you won't have to get in the way at all. Anyway, that's going to do it for this one. I hope those tips and examples were helpful, and now you can go migrate your scheduled tasks, or any other automations you've been meaning to build, into these web-based routines without having to keep your hardware on. So, if you enjoyed the video or learned something new,
16:20
please give it a like. It helps me out a ton. And as always, I appreciate you guys making it to the end of the video. I'll see you on the next one. Thanks, everyone.