<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Do Anything AI — Blog</title><description>Notes from Chris on shipping AI products, LLM weirdness, and running a one-person studio.</description><link>https://doanythingai.com/</link><language>en</language><copyright>Chris / Do Anything AI</copyright><item><title>Most of AI engineering is glue</title><link>https://doanythingai.com/blog/most-of-ai-engineering-is-glue/</link><guid isPermaLink="true">https://doanythingai.com/blog/most-of-ai-engineering-is-glue/</guid><description>Every time I ship something with an LLM in it, I end up writing less AI code than I expected and more plumbing than I planned for.</description><pubDate>Sat, 18 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I opened a pull request this morning for a feature that took me three days. The PR has 640 lines changed. Forty of them call the LLM. The other 600 are the reason the thing actually works in production.&lt;/p&gt;
&lt;p&gt;That ratio is not unusual. That ratio is basically &lt;em&gt;every&lt;/em&gt; AI feature I have ever shipped, and it&apos;s been gnawing at me how different it is from the way &amp;quot;AI engineering&amp;quot; gets talked about online. So I want to write down the ratio, why it refuses to budge, and what it means if you&apos;re hiring or applying for these jobs.&lt;/p&gt;
&lt;h2&gt;The feature I shipped this morning&lt;/h2&gt;
&lt;p&gt;User pastes in a long piece of text. The app summarises it, pulls out the entities, assigns a category, stores the whole thing. Simple product, one screen, one input.&lt;/p&gt;
&lt;p&gt;The LLM code is forty lines. A prompt, a JSON schema, a retry loop, a parse, some small validation. That part took me about ninety minutes, most of which was wording the prompt.&lt;/p&gt;
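&lt;p&gt;If the shape of those forty lines is hard to picture, here&apos;s a minimal sketch of the loop. Everything named in it is illustrative (callModel is a hypothetical stand-in for whichever SDK you&apos;re actually calling); the point is the retry-parse-validate wrapper, not any particular API.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Illustrative sketch, not any particular SDK. `callModel` is a
// hypothetical stand-in: prompt in, raw model text out.
type CallModel = (prompt: string) =&gt; Promise&lt;string&gt;;

interface Extraction {
  summary: string;
  entities: string[];
  category: string;
}

const PROMPT = `Summarise the text, list the named entities, and assign one
category. Respond with JSON: { summary, entities, category }.`;

async function extract(
  callModel: CallModel,
  text: string,
  maxRetries = 3,
): Promise&lt;Extraction&gt; {
  for (let attempt = 1; attempt &lt;= maxRetries; attempt++) {
    const raw = await callModel(`${PROMPT}\n\n${text}`);
    try {
      const parsed = JSON.parse(raw);
      // The small validation: right fields, right shapes.
      if (
        typeof parsed.summary === &apos;string&apos; &amp;&amp;
        Array.isArray(parsed.entities) &amp;&amp;
        typeof parsed.category === &apos;string&apos;
      ) {
        return parsed as Extraction;
      }
    } catch {
      // Unparseable JSON: fall through and retry.
    }
  }
  throw new Error(`Gave up after ${maxRetries} attempts`);
}&lt;/code&gt;&lt;/pre&gt;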
&lt;p&gt;The other 600 lines are, and I&apos;m going to list them so the shape is visible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;authentication&lt;/li&gt;
&lt;li&gt;rate limiting per user&lt;/li&gt;
&lt;li&gt;the job queue&lt;/li&gt;
&lt;li&gt;the worker that pulls from the queue&lt;/li&gt;
&lt;li&gt;the storage schema&lt;/li&gt;
&lt;li&gt;idempotency keys so retries don&apos;t double-charge the user&lt;/li&gt;
&lt;li&gt;per-request cost tracking&lt;/li&gt;
&lt;li&gt;error states for when the text is too long or the model times out&lt;/li&gt;
&lt;li&gt;the admin tool I built for myself to inspect runs that went sideways&lt;/li&gt;
&lt;li&gt;the tiny dashboard that tells me whether today&apos;s cost is tracking where it should&lt;/li&gt;
&lt;li&gt;tests for the non-LLM parts&lt;/li&gt;
&lt;li&gt;observability hooks for when something goes wrong at 3am&lt;/li&gt;
&lt;/ul&gt;
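&lt;p&gt;To make one of those concrete, here&apos;s roughly what the idempotency piece looks like. A minimal sketch, assuming Postgres and a hypothetical runs table; the unique constraint does all the work.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import { Pool } from &apos;pg&apos;;

const pool = new Pool(); // connection settings via the usual PG* env vars

// Hypothetical table, one row per unit of work:
//   CREATE TABLE runs (
//     idempotency_key text PRIMARY KEY,
//     status          text NOT NULL DEFAULT &apos;pending&apos;
//   );

// Returns true if we claimed the work, false if a retry already did.
// Claiming is a single INSERT, so two racing workers can&apos;t both win.
async function claimRun(idempotencyKey: string): Promise&lt;boolean&gt; {
  const result = await pool.query(
    `INSERT INTO runs (idempotency_key)
     VALUES ($1)
     ON CONFLICT (idempotency_key) DO NOTHING`,
    [idempotencyKey],
  );
  return result.rowCount === 1; // 0 means someone got there first
}&lt;/code&gt;&lt;/pre&gt;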
&lt;p&gt;Nothing in that list is &amp;quot;AI.&amp;quot; All of it is how software has always worked. The AI is a function call in the middle.&lt;/p&gt;
&lt;h2&gt;Why the ratio never inverts&lt;/h2&gt;
&lt;p&gt;When I started shipping LLM features four years ago, I assumed the AI layer would grow over time and the plumbing would shrink. The opposite has happened, consistently.&lt;/p&gt;
&lt;p&gt;Every time a better model comes out, I delete AI code. The GPT-3-era version of the feature I shipped this morning had custom chunking, a reranker, a manual eval rig, three different prompts that got composed together. The modern version is five lines of structured output, one prompt, one schema, done. The model got reliable enough that the complexity moved out of my codebase.&lt;/p&gt;
&lt;p&gt;But it didn&apos;t disappear. It moved &lt;em&gt;into the surrounding system&lt;/em&gt;. When the model becomes reliable, the bottleneck becomes everything else. How you queue the work. How you recover from failures. How you let users see what&apos;s happening. How you don&apos;t burn through your API budget in a week because one user found a way to paste in a 400k token input.&lt;/p&gt;
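&lt;p&gt;The 400k-token paste, for instance, gets stopped by a guard that is almost embarrassingly dumb. A sketch; the four-characters-per-token ratio is a rough heuristic for English text, and the cap is a product decision I made up for the example, not a model limit.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Rough heuristic: ~4 characters per token for English prose. Fine for a
// guardrail; use your provider&apos;s tokenizer if you need precision.
const CHARS_PER_TOKEN = 4;
const MAX_INPUT_TOKENS = 32_000;

type GuardResult = { ok: true } | { ok: false; reason: string };

function guardInput(text: string): GuardResult {
  const estimatedTokens = Math.ceil(text.length / CHARS_PER_TOKEN);
  if (estimatedTokens &gt; MAX_INPUT_TOKENS) {
    return {
      ok: false,
      reason: `Input is roughly ${estimatedTokens} tokens; the limit is ${MAX_INPUT_TOKENS}.`,
    };
  }
  return { ok: true };
}&lt;/code&gt;&lt;/pre&gt;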
&lt;p&gt;None of that is AI. It&apos;s just engineering. And because the AI layer keeps getting simpler, the glue layer keeps getting proportionally bigger. Better models make the ratio worse, not better, if you measure &amp;quot;worse&amp;quot; as &amp;quot;percentage of code that is actually about the model.&amp;quot;&lt;/p&gt;
&lt;h2&gt;The moat is in the 600 lines&lt;/h2&gt;
&lt;p&gt;This is the part I want people to sit with, because it&apos;s the part that changes how you should spend your time.&lt;/p&gt;
&lt;p&gt;The forty lines that call the LLM? Anyone can write those. They&apos;re a copy of the API docs. The moat of an AI product isn&apos;t in the prompt. It&apos;s in the idempotency, the cost controls, the error UX, the queue, the admin tooling. The stuff that makes the product not fall over when real users hit it at real scale.&lt;/p&gt;
&lt;p&gt;A competitor who studies your prompt can replicate the prompt in an afternoon. A competitor who has to replicate your whole production system — the retry logic, the cost guardrails, the admin UI you built so your support team can answer a refund email — is going to take months, and might not even bother. The glue is where the durable advantage lives.&lt;/p&gt;
&lt;p&gt;If you understand this, you spend less time tinkering with prompts and more time building the boring infrastructure. Which is exactly backwards from the Twitter version of the job.&lt;/p&gt;
&lt;h2&gt;Who you should hire (and who you are)&lt;/h2&gt;
&lt;p&gt;Every month I see roles posted for &amp;quot;AI engineer&amp;quot; where, if you read the actual job description, 85% of the work is queues, databases, retries, dashboards, and frontends. And then the company is surprised when the research-heavy candidate they hire is disappointed by the job, and the company is disappointed by the candidate.&lt;/p&gt;
&lt;p&gt;If you&apos;re hiring to ship AI products, you are hiring a product engineer who happens to be comfortable calling a model. Not a researcher. Not a fine-tuning specialist. Not someone whose last project was comparing chunking strategies on a 200-page PDF.&lt;/p&gt;
&lt;p&gt;I&apos;d rather hire an engineer who has taken one boring SaaS from zero to a thousand users than an engineer who has fine-tuned five models but never deployed anything. The former will pick up the prompt layer in a week. The latter will spend six months figuring out why their jobs aren&apos;t getting processed, and by the time they solve it, the feature will have missed its launch.&lt;/p&gt;
&lt;p&gt;The flip side is also worth saying out loud: if &lt;em&gt;you&apos;re&lt;/em&gt; applying for AI engineer roles and you came from a non-AI background, you are massively underestimating how much of the job is stuff you already know how to do. Apply.&lt;/p&gt;
&lt;h2&gt;What this isn&apos;t&lt;/h2&gt;
&lt;p&gt;I want to say this before anyone reaches for the comments.&lt;/p&gt;
&lt;p&gt;I&apos;m not saying the LLM layer is trivial. Getting a prompt to be reliable across real user inputs is genuinely hard. Evals are genuinely hard. Latency-cost-quality trade-offs are genuinely hard. If you&apos;re building a product where the model is the product — a coding agent, say, or a legal-search engine — that ratio tips much further toward AI work than what I&apos;m describing.&lt;/p&gt;
&lt;p&gt;But for the majority of &amp;quot;AI features&amp;quot; that are shipping inside normal software products right now, the ratio is 10% model and 90% everything else. The sooner you embrace that, the sooner you ship; and the sooner you ship, the sooner you find out whether the model actually solved the problem you thought it did.&lt;/p&gt;
&lt;p&gt;Learn the 10%. Learn the 90%. People who only know the 10% can make a prototype. People who know both can ship a product.&lt;/p&gt;
</content:encoded><category>llms</category><category>engineering</category><author>chris@doanythingai.com (Chris)</author></item><item><title>Programmatic SEO still works</title><link>https://doanythingai.com/blog/programmatic-seo-still-works/</link><guid isPermaLink="true">https://doanythingai.com/blog/programmatic-seo-still-works/</guid><description>Every six months someone declares pSEO dead. Every six months my pSEO sites keep growing. Here&apos;s what actually changed.</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In March of 2024 I watched a pSEO site of mine go from 300k monthly pageviews to 14k in nine days. Same site, same pages, same content. Google just stopped showing it. It never recovered.&lt;/p&gt;
&lt;p&gt;A few months later, a different pSEO site I run hit a million pageviews for the first time. Same framework, same deploy pipeline, same &amp;quot;programmatic&amp;quot; approach to generating the pages.&lt;/p&gt;
&lt;p&gt;Both are true at the same time, and the difference between them is the most useful thing I&apos;ve learned about SEO in the last three years. This is me trying to spell it out, because every time Google runs a big update, someone on LinkedIn announces that programmatic SEO is dead, and I want to be able to link to this post instead of typing the same rebuttal.&lt;/p&gt;
&lt;h2&gt;The specific kind of pSEO that died&lt;/h2&gt;
&lt;p&gt;Not all of it. The version that died is the version that deserves to be called dead.&lt;/p&gt;
&lt;p&gt;If you scraped Wikipedia, ran it through an LLM to &amp;quot;rephrase,&amp;quot; dropped the output into a template with 50,000 thin pages, and waited for traffic — that site is dead. Google got extremely good at spotting that exact shape around early 2023, and it has only gotten sharper since. The shape is what it can detect: enormous page count, tiny information-per-page ratio, no signal that any human ever touched the content.&lt;/p&gt;
&lt;p&gt;My site that dropped from 300k to 14k looked like that. Not because I&apos;d deliberately farmed it, but because over three years it had drifted in that direction. I&apos;d kept adding pages faster than I was deepening them. Google noticed.&lt;/p&gt;
&lt;p&gt;Google doesn&apos;t hate scale. It hates thin. The two get conflated in every &amp;quot;pSEO is dead&amp;quot; post I read, and they&apos;re completely different things.&lt;/p&gt;
&lt;h2&gt;The one test that predicts which sites survive&lt;/h2&gt;
&lt;p&gt;Here&apos;s the test that would have saved me that 300k site if I&apos;d known it in 2021:&lt;/p&gt;
&lt;p&gt;Would I be proud of this page if I&apos;d had to write it by hand?&lt;/p&gt;
&lt;p&gt;If the answer is yes — if a human could look at one of my pages and think &amp;quot;oh, someone who knows what they&apos;re talking about wrote this&amp;quot; — the page survives updates. If the answer is &amp;quot;uh, maybe if you squint,&amp;quot; it&apos;s at risk. If the answer is &amp;quot;no, this only exists because templates made it exist,&amp;quot; it&apos;s going to get nuked sooner or later.&lt;/p&gt;
&lt;p&gt;Apply that test to KeqingMains, which is technically a pSEO site with thousands of pages. Each page is a character guide, a weapon guide, a mechanic explainer. Each one exists because somebody actually wanted that specific answer, and somebody who cared wrote or reviewed it. I&apos;d be proud to put my name on any of them. The site has been through every major Google update since 2021 and it just keeps growing.&lt;/p&gt;
&lt;p&gt;The word &amp;quot;programmatic&amp;quot; describes how the pages get produced. It does not describe whether the pages are worth anything. Those are separate axes, and if you flatten them into one thing you&apos;ll draw the wrong conclusions about what ranks.&lt;/p&gt;
&lt;h2&gt;What I actually ship differently now&lt;/h2&gt;
&lt;p&gt;Three things changed in how I build pSEO sites after that 300k-to-14k collapse.&lt;/p&gt;
&lt;p&gt;First, I cite actual data. Numbers, quotes, sources, screenshots. Not &amp;quot;studies show&amp;quot; but &amp;quot;in the March 2024 patch notes, the attack scaling changed from 1.2x to 1.4x.&amp;quot; Pages that know a specific thing the rest of the internet doesn&apos;t tend to survive the updates that flatten the rephrasing farms.&lt;/p&gt;
&lt;p&gt;Second, I make authorship obvious. Bylines on every page. A real about page. A consistent voice across the site. If a reader lands and asks &amp;quot;who wrote this and why should I trust them&amp;quot; and the answer is unclear, AI Overviews won&apos;t cite me and Google will eventually stop showing me. This is newer — it wasn&apos;t a ranking factor four years ago, and now it very much is.&lt;/p&gt;
&lt;p&gt;Third, I cut ruthlessly. I used to publish everything that could theoretically rank. Now I unpublish more than I publish. If a page isn&apos;t in the top ten for anything after six months, I either rewrite it until it deserves to be, or I delete it. My sites are smaller than they used to be and they earn more.&lt;/p&gt;
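&lt;p&gt;The cull itself is mechanical once you have the ranking data. Here&apos;s a sketch of the check, assuming a Search Console CSV export with page, query, and position columns; the filename and column order are placeholders, so adjust to whatever your export actually looks like.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import { readFileSync } from &apos;node:fs&apos;;

// Naive CSV handling: assumes a header row and no quoted commas.
const rows = readFileSync(&apos;search-console-export.csv&apos;, &apos;utf8&apos;)
  .trim()
  .split(&apos;\n&apos;)
  .slice(1) // drop the header
  .map((line) =&gt; {
    const [page, , position] = line.split(&apos;,&apos;);
    return { page, position: Number(position) };
  });

// A page earns its keep if it ranks top ten for *anything*.
const bestPosition = new Map&lt;string, number&gt;();
for (const { page, position } of rows) {
  const best = bestPosition.get(page) ?? Infinity;
  if (position &lt; best) bestPosition.set(page, position);
}

for (const [page, best] of bestPosition) {
  if (best &gt; 10) {
    console.log(`rewrite or delete: ${page} (best position ${best})`);
  }
}&lt;/code&gt;&lt;/pre&gt;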
&lt;h2&gt;The part where I eat crow&lt;/h2&gt;
&lt;p&gt;I&apos;ve had other pSEO sites get hit in the last two years. It&apos;s not a hypothetical. Every time, the common thread was the same: I was pushing the definition of &amp;quot;useful&amp;quot; a little further than it wanted to go. I was making pages that were technically correct, technically answering a question, but that I couldn&apos;t have written with a straight face by hand.&lt;/p&gt;
&lt;p&gt;When I failed the proud-by-hand test and shipped anyway, Google eventually caught up. Every time. No exceptions.&lt;/p&gt;
&lt;p&gt;That test is honestly annoying to live by, because it caps your output. You can&apos;t ship 50k pages if you&apos;d only have been proud of 5k of them. But the 5k will still be earning in three years, and the 50k won&apos;t.&lt;/p&gt;
&lt;h2&gt;The bigger point, if you&apos;re deciding what to work on&lt;/h2&gt;
&lt;p&gt;Programmatic SEO as a free-money glitch is finished. Nobody&apos;s going to 5x a site by spinning up 100k template pages anymore, and if anyone&apos;s telling you otherwise they&apos;re selling you a course.&lt;/p&gt;
&lt;p&gt;Programmatic SEO as a legitimate way to structure a content site around a specific problem domain is alive, growing, and maybe more valuable than it&apos;s ever been. The glitchy version got cleared out of the SERPs, which means there&apos;s more oxygen for the real version.&lt;/p&gt;
&lt;p&gt;I&apos;m still using it. I&apos;ll still be using it in 2027. I just apply the proud-by-hand test to every page before it ships now, and the 300k-to-14k collapse hasn&apos;t repeated.&lt;/p&gt;
</content:encoded><category>seo</category><category>pseo</category><author>chris@doanythingai.com (Chris)</author></item><item><title>Running twenty sites solo</title><link>https://doanythingai.com/blog/running-twenty-sites-solo/</link><guid isPermaLink="true">https://doanythingai.com/blog/running-twenty-sites-solo/</guid><description>People ask how I keep 20 sites running without a team. The honest answer is that most of them don&apos;t need me.</description><pubDate>Thu, 16 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A friend came over last week and watched me close my laptop at 5pm. He said &amp;quot;wait, that&apos;s it? You run twenty sites.&amp;quot; It sounded like an accusation.&lt;/p&gt;
&lt;p&gt;I get asked this at least once a month. How do you maintain twenty sites by yourself? The honest answer is that I don&apos;t, really. Most of them don&apos;t need maintaining. Which is what I want to tell you about, because the part that makes that possible is a rule I wish someone had drilled into me five years ago.&lt;/p&gt;
&lt;h2&gt;The rule that made everything else possible&lt;/h2&gt;
&lt;p&gt;If a project needs me to check on it weekly, it&apos;s a bad project.&lt;/p&gt;
&lt;p&gt;That&apos;s it. That&apos;s the whole rule. Every time I find myself in a Monday routine of &amp;quot;let me go make sure X is still working&amp;quot; — whether it&apos;s a dashboard I built, a content pipeline I wrote, a dependency I&apos;m nervous about — the project is failing. Not slowly. Failing.&lt;/p&gt;
&lt;p&gt;The options at that point are three, and only three: automate what I keep checking, rebuild the thing until it doesn&apos;t need checking, or kill it.&lt;/p&gt;
&lt;p&gt;Two years ago I had a site that scraped a gaming leaderboard and republished it with commentary. Decent traffic, a few hundred bucks a month. I was checking it every Monday because the source site kept changing their HTML. Every few weeks something would break and I&apos;d spend an afternoon chasing it. I finally added up the hours and the revenue and realised I was earning $4 an hour on this thing when you count the babysitting. Killed it the next day. Haven&apos;t missed it once.&lt;/p&gt;
&lt;p&gt;The ones I didn&apos;t kill — the twenty I still run — all pass this test. They sit there quietly. I can leave them alone for a month, and they&apos;ll be fine a month later.&lt;/p&gt;
&lt;h2&gt;What &amp;quot;sits there quietly&amp;quot; actually means&lt;/h2&gt;
&lt;p&gt;Boring is the whole trick. Every new project I start now, I ask: can this be boring?&lt;/p&gt;
&lt;p&gt;Static pages wherever possible. No user accounts unless the product literally cannot exist without them (turns out, usually it can). No cron jobs I can&apos;t explain in one sentence. No third-party integrations that can fail silently. When a dependency upgrades, I want to know within seconds if the site broke, not three weeks later when someone emails.&lt;/p&gt;
&lt;p&gt;The corollary is: I build almost nothing in isolation anymore. Every site I ship uses the same bones. Astro for the pages, Cloudflare for hosting and DNS, Markdown or a JSON file for content, Postgres if I need persistence. That&apos;s it. Four pieces. If a new project doesn&apos;t fit this stack, I&apos;ll often just not build it, because the long-term cost of a bespoke stack is higher than the short-term win of the feature.&lt;/p&gt;
&lt;p&gt;Standardising was the biggest single productivity change I&apos;ve made as a solo operator. When every project has the same bones, the bones stop costing me cycles.&lt;/p&gt;
&lt;h2&gt;The monitoring I actually have&lt;/h2&gt;
&lt;p&gt;You would assume, running twenty sites, I&apos;d have an elaborate observability setup. Grafana dashboards, alert channels, a PagerDuty rotation of one.&lt;/p&gt;
&lt;p&gt;I don&apos;t. I have UptimeRobot hitting the important URLs every five minutes, and that&apos;s the whole monitoring stack. If something breaks badly enough to matter, it&apos;ll show up in revenue or traffic within a day, and I&apos;ll see it. If it&apos;s too subtle to show up that way, it&apos;s also too subtle to be worth waking up for.&lt;/p&gt;
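&lt;p&gt;For a sense of scale, the entire job could be sketched in a dozen lines. Not that you should roll your own (the whole point of paying UptimeRobot is that there&apos;s nothing for me to host), but it shows how little &amp;quot;monitoring stack&amp;quot; means here. The URLs are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// The whole job: fetch each URL, complain if it isn&apos;t a 2xx. Run it from
// cron every five minutes, route failures to email, and that&apos;s the service.
const URLS = [
  &apos;https://example-site-one.com/&apos;,
  &apos;https://example-site-two.com/&apos;,
];

async function checkAll(): Promise&lt;void&gt; {
  for (const url of URLS) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
      if (!res.ok) console.error(`${url} returned ${res.status}`);
    } catch (err) {
      console.error(`${url} unreachable:`, err);
    }
  }
}

checkAll();&lt;/code&gt;&lt;/pre&gt;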
&lt;p&gt;Real engineers hate this. That&apos;s fine. The ops practice that would be correct at a company of fifty people is comic overkill for a company of one. I&apos;d rather spend the hours I&apos;m not spending on dashboards on writing the next thing.&lt;/p&gt;
&lt;h2&gt;What actually eats my time&lt;/h2&gt;
&lt;p&gt;Maintenance doesn&apos;t. The two things that eat my week are:&lt;/p&gt;
&lt;p&gt;Deciding what to build next. This is harder than it sounds when you already have twenty things you could be pushing on. I wrote more about this in another post.&lt;/p&gt;
&lt;p&gt;And then actually building it. Shipping the next project, or the next big feature for one of the flagship sites.&lt;/p&gt;
&lt;p&gt;I probably spend 15% of a typical week keeping old stuff alive, 20% on the blog and the consulting, and the rest on whatever the next thing is. If that ratio tips the other way for more than a few weeks, one of the old projects has started failing the weekly-check rule, and I need to fix it or kill it. There&apos;s no middle ground.&lt;/p&gt;
&lt;h2&gt;The takeaway, if you&apos;re managing more than one thing&lt;/h2&gt;
&lt;p&gt;Apply the weekly-check test to every project you run. Be honest about which ones demand regular attention. Then fix them until they don&apos;t, or let them go. You can run a portfolio ten times the size of mine if none of the individual items are loud, and you can run zero sites if even one of them is loud enough to take over your week.&lt;/p&gt;
&lt;p&gt;Twenty quiet sites beat three loud ones, every time. The hard part is being willing to kill the loud ones when they stop being worth the noise.&lt;/p&gt;
</content:encoded><category>indie-hacking</category><category>infra</category><author>chris@doanythingai.com (Chris)</author></item><item><title>Pricing a one-hour consult</title><link>https://doanythingai.com/blog/pricing-a-one-hour-consult/</link><guid isPermaLink="true">https://doanythingai.com/blog/pricing-a-one-hour-consult/</guid><description>I charge a flat rate for one hour. No retainers, no scope creep. Here&apos;s how I landed on the number.</description><pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A guy named Jake emailed me last month asking how I priced my consult. I wrote him back a long reply, realised halfway through I&apos;d never actually written this down anywhere, and here we are.&lt;/p&gt;
&lt;p&gt;I sell one-hour consults and nothing else. One call, one invoice, done. No discovery call, no retainer, no &amp;quot;let me send over a statement of work.&amp;quot; The price on the page is the price.&lt;/p&gt;
&lt;p&gt;People want to know how I landed on the number. Here&apos;s the useful part, and then the part nobody tells you.&lt;/p&gt;
&lt;h2&gt;The morning-after test&lt;/h2&gt;
&lt;p&gt;Here&apos;s the trick that did more than anything to get my pricing right.&lt;/p&gt;
&lt;p&gt;Whatever you&apos;re charging, imagine you&apos;ve just hung up after a consult. It went fine. Client was friendly, you helped, nobody&apos;s unhappy. It&apos;s 9am the next day. You&apos;re making coffee. How do you feel?&lt;/p&gt;
&lt;p&gt;If you feel fine, you&apos;re priced about right. If you feel like you&apos;d rather have been shipping your own thing, your price is too low. Not because you&apos;re greedy. Because the money has to cover both the hour you spent &lt;em&gt;and&lt;/em&gt; the hour you didn&apos;t spend on your own work. Opportunity cost is real, and it shows up in your gut before it shows up on a spreadsheet.&lt;/p&gt;
&lt;p&gt;I moved my price three times on this test alone. Each time it was because a few mornings in a row the answer was &amp;quot;I would rather have shipped.&amp;quot; The number went from $500 to $750 to what it is now, and demand didn&apos;t change once. The only thing that changed was how I felt on Tuesday mornings.&lt;/p&gt;
&lt;p&gt;Better than a market survey. Better than asking friends. Better than a conversion rate. Your gut updates faster than any of that.&lt;/p&gt;
&lt;h2&gt;Why I stopped letting the price climb&lt;/h2&gt;
&lt;p&gt;There&apos;s a ceiling too, and I hit it by accident.&lt;/p&gt;
&lt;p&gt;Past a certain point, people who booked started expecting a deliverable. They&apos;d ask for a memo, a follow-up, a Slack thread. Anything tangible to justify what they&apos;d paid. That&apos;s a different product. That&apos;s a mini-consulting engagement, not a consult. Different margins, different headaches, different version of me to run it.&lt;/p&gt;
&lt;p&gt;I didn&apos;t want that business. So now the ceiling is &amp;quot;the highest I can charge without the client quietly expecting more than sixty minutes of talking.&amp;quot; Below that, they&apos;re buying my time and leaving happy. Above it, they&apos;re buying an outcome I never promised, and we&apos;re going to end up in an awkward email thread.&lt;/p&gt;
&lt;p&gt;Know there&apos;s a ceiling. Don&apos;t blow past it by accident.&lt;/p&gt;
&lt;h2&gt;The part nobody writes about&lt;/h2&gt;
&lt;p&gt;The price isn&apos;t the hard part. Holding the scope is.&lt;/p&gt;
&lt;p&gt;Every person who books will try — usually without realising it — to turn an hour into three. They&apos;ll email follow-up questions the next day. They&apos;ll ask you to pre-read a deck. They&apos;ll reschedule twice and show up unprepared. They&apos;ll DM you a week later with &amp;quot;quick question.&amp;quot; If you say yes to any of it, the real hourly rate plummets and you&apos;ll start to resent the whole thing.&lt;/p&gt;
&lt;p&gt;The fix is being blunt before they ever book. My page literally says: &lt;em&gt;one hour, no follow-ups, no implementation work, no pre-reads.&lt;/em&gt; About one in ten prospects reads that and bounces. Those are the ones who&apos;d have caused most of the scope problems. Good filter.&lt;/p&gt;
&lt;h2&gt;The fraud feeling, and what actually kills it&lt;/h2&gt;
&lt;p&gt;You will sometimes feel like a fraud for charging money to talk for sixty minutes. This feeling is persistent and it is wrong, but &amp;quot;it&apos;s wrong&amp;quot; isn&apos;t enough to make it go away.&lt;/p&gt;
&lt;p&gt;What worked for me was a little text file I started keeping a year in. Every time a client emailed back later with an outcome, I logged it. One founder shipped a feature she&apos;d been stuck on for four months. Another killed a project that would have cost her $80k to build. One guy pivoted his product&apos;s positioning off a single sentence I said near the end of the call.&lt;/p&gt;
&lt;p&gt;None of those outcomes were worth merely an hourly rate. They were worth ten times the hourly rate, at least. Once that file had six or seven entries, the fraud feeling just evaporated. The rate turned into a rounding error on the outcome, and I stopped flinching when people paid.&lt;/p&gt;
&lt;p&gt;If you&apos;re just starting out and you don&apos;t have the file yet, borrow mine: assume the price you&apos;re nervous about is already too low, and find out for real by charging it.&lt;/p&gt;
&lt;h2&gt;If you&apos;re sitting on a pricing page tonight&lt;/h2&gt;
&lt;p&gt;Pick a number that passes the morning-after test. Write the scope in one sentence. Enforce it on the booking page so the scope-creepers self-select out. Then raise the price every six months until someone emails back &amp;quot;that&apos;s too high for me&amp;quot; — not as a negotiation, just a real no.&lt;/p&gt;
&lt;p&gt;The first real no tells you where the market actually is. Until then, you&apos;re guessing. Guess high.&lt;/p&gt;
</content:encoded><category>consulting</category><category>business</category><author>chris@doanythingai.com (Chris)</author></item><item><title>Hello, this is a blog now</title><link>https://doanythingai.com/blog/hello-world/</link><guid isPermaLink="true">https://doanythingai.com/blog/hello-world/</guid><description>Quick note on why I&apos;m starting this up and what it&apos;s going to be.</description><pubDate>Tue, 14 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been meaning to start a blog for about two years. I kept not doing it because I wasn&apos;t sure what I&apos;d write about. Today I decided the answer is &amp;quot;doesn&apos;t matter, just start.&amp;quot;&lt;/p&gt;
&lt;p&gt;So this is the start.&lt;/p&gt;
&lt;h2&gt;Why I waited so long (and why that was dumb)&lt;/h2&gt;
&lt;p&gt;The reason I didn&apos;t start sooner is the same reason most people don&apos;t start their blog: I was waiting to figure out the theme. What&apos;s my niche. What&apos;s my voice. Who&apos;s the audience. What would the first post even be.&lt;/p&gt;
&lt;p&gt;Two years of that. Meanwhile, a few people I know who just started writing in 2022 have built whole audiences off posts that were, charitably, nothing posts. The first post on almost every writing career I respect is forgettable. The reason the rest of the career exists is that the person kept going.&lt;/p&gt;
&lt;p&gt;So: if you&apos;ve been waiting for clarity before you start the thing you want to start, stop. The clarity comes from doing it for six months, not from sitting and thinking about it.&lt;/p&gt;
&lt;h2&gt;What I&apos;ll probably write about&lt;/h2&gt;
&lt;p&gt;Short notes, mostly. Stuff I figure out while building. Things I&apos;d normally tweet but want to keep in one place where I actually own the URL. Occasional longer writeups when something&apos;s worth it.&lt;/p&gt;
&lt;p&gt;A few recurring topics: whatever I&apos;m shipping that week, LLM-specific weirdness I run into, programmatic SEO (because it&apos;s what pays for half the portfolio), and short posts about running a one-person studio.&lt;/p&gt;
&lt;h2&gt;What I&apos;m not going to write&lt;/h2&gt;
&lt;p&gt;Listicles. &amp;quot;10 Ways AI Will Change X.&amp;quot; Anything a content farm could write. Posts where I don&apos;t have an opinion. If I find myself writing a summary of other people&apos;s ideas without adding anything, I&apos;ll kill the draft.&lt;/p&gt;
&lt;h2&gt;How often&lt;/h2&gt;
&lt;p&gt;Roughly daily for a while. I have a backlog, and the pace will drop once I&apos;m caught up. Quality over schedule, always, but while the backlog lasts, schedule wins.&lt;/p&gt;
&lt;p&gt;That&apos;s the setup. More soon.&lt;/p&gt;
</content:encoded><category>meta</category><author>chris@doanythingai.com (Chris)</author></item></channel></rss>