Creating in the time of AI

Why should we create in the age of AI? How can we compete?

This post from LMNT puts things in a different perspective:

I take a little comfort in knowing that it will be impossible for “AI” tools—here on out—to differentiate between human-made and machine-generated content, thereby inevitably feeding on their own regurgitations. It’s already happening, of course.

Over the next few years, while these “AI” companies try to sort that out (and fail), and search engines try to index only the sites that are what any reasonable person would consider genuine (and fail), the best thing we can all do is just create what we want while ignoring their problems, because they’re not our problems.

We have limited time and energy. Why spend it lacing our art with poison for AI scrapers? Why spend it focusing on how to stand out on platforms that can’t differentiate human-made from AI-generated? Why spend it publishing our new creations alongside AI-generated content? Don’t spend time on these things. These are all just busywork tasks that slow us down from doing what we really want to do: create.

Depending on what you’re creating, rather than worrying about AI, you might be better off asking, So what?

So what if AI is trained on my creations? Sure, I don’t like the idea of it, but what’s the real point of creating? On one hand, the act of creation is for me. For an example of what I mean, look no further than the audio I’ve started adding to my recent blog posts. The point is not to start a ‘podcast’. The point is to make myself read my posts aloud. When you hear your own writing read back, you sometimes realize it sounds strange. I also like to think of it as a way to dip my toes into public speaking, a skill I want to improve.

Sure, AI can copy my voice and my writing and steal some of my fire online. But that doesn’t affect me as a person offline.

Online is a part of my life. But it’s not my whole life.

And I agree with Louie Mantia (LMNT) that AI will soon start cannibalizing its own content, greatly hurting future quality. This is a concern I addressed on another version of my blog. It seems that generative AI is destined to best itself. So let’s stop worrying about it and instead focus on creating.

Jake LaCaze wonders if generative AI might actually put a premium on human experience and creation in the end.

Is artificial general intelligence the real benchmark for AI?

Today’s target for artificial intelligence (AI) seems to be artificial general intelligence (AGI), a technology that is competent in many areas, as humans are. Today’s AI is most often highly specialized, focusing on one area with a narrow set of tasks, which makes it best suited for specialized audiences with specialized needs. But with AGI, the prophets of AI can achieve their dream: AI for everyone, everywhere.

Or so the prophecy claims.

We very well may achieve AGI, but I’m skeptical we’ll get there in the next decade (which, compared to the estimates from the prophets of AI, is an eternity). The simple truth is that we still know very little about how the human brain works. Regardless of how some may feel about humans, our brains are complex machines, calculating far more than they’re given credit for.

The developers of AGI seem hellbent on replicating and/or replacing humans. But can you replicate or replace what you don’t fully understand? Supplementing and improving upon human intelligence seems a far better goal. This is why I prefer the concept of augmented intelligence over AGI1.

Anyone familiar with SMART goals knows that goals should be attainable—that’s the ‘A’ in ‘SMART’, after all. And I’m not convinced that replicating or replacing human thought and processing will be attainable in the near future.

If Gary Marcus is right—if the hype is dying and the return on investment just isn’t there2—then AGI will likely arrive much, much later than the prophets of AI would have us believe.

Jake LaCaze still believes in the potential of humans.


  1. AI Should Augment Human Intelligence, Not Replace It from Harvard Business Review ↩︎

  2. The ROI on GenAI might not be so great, after all by Gary Marcus ↩︎

Douglas Rushkoff’s ‘Survival of the Richest’ shows how delusional the tech billionaires really are

I could try to tell you what exactly Douglas Rushkoff’s Survival of the Richest: Escape Fantasies of the Tech Billionaires1 is about via a traditional book review, or I could hope that an inspired rant might give you a better idea. If you haven’t already figured it out, I’m choosing the latter route.

The tech billionaires have one simple goal: to shelter themselves from the world they’ve shaped with their outsized wealth, power, and influence. Undoing all they’ve done—making true positive change via small, incremental improvements that risk going unrecognized—is beyond them. Simply having the option to escape this world, via one avenue or another, shows that the tech billionaires already live in a reality far different from the one most of us inhabit.

How many ways can one hope to escape?

Rushkoff starts by describing the struggles of those tech billionaires outfitting their doomsday bunkers for the coming apocalypse2. A lot of thought goes into such preparation: location, supplies, air filtration. The tech billionaires are also looking into how to motivate their security teams to keep protecting them when the markets collapse and currency is worthless.

Others hope to one day leave the earth behind. They plan to colonize Mars and start anew, where they’ll stand to gain even more as the early adopters of a fresh society.

But what about those tech billionaires who can’t escape in these ways? What if they have no choice but to stay on this boring earth, and what if everything doesn’t go to absolute hell and they can’t justify running away to their bunkers in Hawaii or New Zealand?

That’s where digital escapes like the Metaverse come into play. Who needs Mars or a doomsday bunker when you can build a digital world to replace the physical one? You can always buy digital real estate and rent it out to offset any losses realized from your real estate in the unplugged world3. Some might call this strategy ‘diversification.’

One foot out the door

Can you be tied to the world around you if your mind is set on escaping? Are you invested in the slightest? If the answer is no, then why do we let these select few build a world we’ll be stuck with when they flee the first chance they get? If you already have one foot out the door because you’re convinced that to stay is hopeless, then at what point is reality a foreign concept? And if you’re so sure that a certain outcome is inevitable, when does everything begin to look like a prophecy? And when do you decide that resistance is futile? You might as well get what you can while you can. Just make sure you get enough to help you get away at a later date.

Perhaps we can’t blame the tech billionaires for looking forward to their own big exit, when their investors expect an exit of their own, usually in the form of an IPO or flipping the company at some multiple of the original investment.

Many in tech have long adopted Mark Zuckerberg’s mantra to ‘Move fast and break things.’4 But tech’s secondary mantra appears inspired by Matthew Good5:

We’ll stick to the plan:

The fall of man

The tech billionaires aren’t worried though, because as man falls, they will rise, whether to Mars, the Metaverse, or to the safety of their underground bunkers.

No big deal though. I’m sure they’ll wave bye and give a heartfelt thanks for all we’ve done to enable them to get the hell out of Dodge as they leave us to our fates6.

Jake LaCaze really doesn’t like being so sour about tech. But he’s finding it hard not to be.


  1. Survival of the Richest: Escape Fantasies of the Tech Billionaires on Bookshop.org (Affiliate link) ↩︎

  2. ‘Why is Mark Zuckerberg building a private apocalypse bunker in Hawaii?’ on The Guardian ↩︎

  3. ‘Inside the lucrative business of a metaverse landlord, where monthly rent can hit $60,000 per property’ on Fast Company ↩︎

  4. ‘The problem with “Move fast and break things”—Tech needs a better guiding principle’ on jakelacaze.com ↩︎

  5. ‘The Fall of Man’ by Matthew Good Band on YouTube ↩︎

  6. ‘Jeff Bezos thanks Amazon customers and employees who “paid for all this”’ on CNN ↩︎

Don’t be a SaaShole

Yesterday I had an idea for a mock LinkedIn influencer. He’d be a tech bro dubbed the SaaShole, who would serve as a blueprint for how not to do tech marketing.

The character would be a mix of Dexter Guff, from the satirical podcast Dexter Guff is Smarter Than You (And You Can Be Too)1, and Dom Mazzetti, from the BroScience YouTube channel2.

Or, if someone wanted to take a more sincere approach, they could call the program Don’t Be a SaaShole and share examples of how not to be a SaaShole.

Unfortunately, a quick Google search killed my ambitions, as I discovered the SaaSholes podcast3.

What is a SaaShole anyway—and why shouldn’t I be one?

I would define a SaaShole as a tech bro (or sis) who talks only in tech jargon to sound smart rather than to solve a customer’s problems.

The SaaShole wants to sell his solution to make a quick buck, not to make anyone else’s life easier. Whatever your industry, you’re in business to serve your customers or clients. If you’re not doing that, then why the hell should you expect to stay in business? Why should anyone continue to give you their money if they’re not really getting anything back in return?

The SaaShole is a mindset. Despite its specific name, the SaaShole mindset doesn’t apply only to those in SaaS. It applies all the way up and down the tech industry.

Often, tech companies are selling tech solutions to non-tech people—people who don’t identify as working in the tech industry. So tech bros (and sisses) are often better off assuming their customers know little about tech beyond how to check their email on their smartphone, because these customers aren’t concerned about the tech—they’re concerned about solving an issue and completing a task that they don’t view through the lens of technology. If tech can help them, great—they’re all for the help.

But for them, tech is a means to an end, not the end itself. (The good news is that if you’re wrong in assuming that your customers know next to nothing about tech, you can always deepen the technical explanations to meet them where they are. Starting with the default assumption that your customers don’t know much about tech and then ramping up seems a better strategy than bombarding them with more than they can handle and then trying to bring it down to their level.)

I’ve previously written about how I think tech suffers from a lack of philosophy beyond ‘Move fast and break things’4. Consider this post an addendum.

And lastly, if you work in tech, please don't be a SaaShole. Actually help people.

Jake LaCaze often has great ideas that other people have already had.

Processes and workflows before tech stack

When all you have is a hammer, everything looks like a nail.

The tech industry is the ultimate hammer in that it thinks tech is the best solution for every problem1.

And many businesses buy into the tech industry’s thinking as they scramble for that Holy Grail: the one SaaS solution to rule them all and bring order to the chaos. So they run out and sign a contract, then spend months or even years importing their data and working with their vendors on templates and custom reports that fall short of what the nice salesman promised. The luster wears off, the company concludes it adopted the wrong system, and it starts the process over again.

Fast forward a couple years and they’re back at the beginning of the loop, resuming the search for that one perfect solution.

What if the problem lies not in the tech but in what the tech is being tasked with—AKA the processes?

How much of what the tech is doing actually needs to be done? How many of those tasks could be removed?

Tech can work only if your processes and workflows are in order. By getting a handle on your processes and workflows, maybe you’ll reduce the need for tech in the first place.

And by removing steps—by practicing addition by subtraction—maybe you strike a better balance.

In terms of productivity and efficiency, we’re often too easily tempted to do more. American hustle culture gravitates toward the logic that more activity is the ideal solution. But sometimes the secret to doing more starts with doing less, or at least being mindful about what we’re doing and should be doing.

And we can often practice such mindfulness no matter what’s in our tech stack.


  1. Is AI just a solution looking for a problem? ↩︎

The problem with ‘Move fast and break things’—Tech needs a better guiding principle

If you move fast and break things, do you ever come back to clean up your mess? Or do you just look for the next thing to smash?


The October 2023 cover of Wired magazine irked me the moment I saw it.

Cover of Wired Magazine featuring the leaders of OpenAI, with the caption: 'Dear AI Overlords, Don't F*ck This Up'

On one hand, the cover irked me because it seemed to be saying that we, the commoners, are at the mercy of the lords of AI (let’s just scratch out ‘overlords’ for the sake of accuracy). And it bothered me, on the other hand, because there seems to be truth in the sentiment.

Why shouldn’t the lords of AI mold our future, since the tech industry has had its way so far in the 21st century?

But don’t we have enough evidence of why it’s a bad idea to let tech call all the shots?

We’ve already seen what happens when AI has free rein. All we have to do is look at the algorithmic wasteland that is now social media. Tech moved fast and broke a lot as it formed social media. But tech has yet to go back and fix the mess it created along the way.

And why should they? What’s their incentive? Companies exist to make money. Tech companies are no different. Nor should they be. But when you consider the reach of the industry’s influence (empowered by a hands-off approach from regulators), is it wrong to ask tech to be a better steward?

Leaning on AI in the form of algorithms has flooded the internet with example after example of misinformation and disinformation, making respectable journalism even harder to find in the 21st century. And as a recent lawsuit from The New York Times brings to light, the tech industry is at risk of doubling down on its prior negligence1. But, as with social media, it’s not worth their time to go back and pick up the pieces. So they never will.

Why should we trust these same companies to break more stuff with generative AI?

Tech needs a better guiding principle than ‘Move fast and break things’, one that recognizes the responsibility that comes with disruption.

Remember when your elders told you to leave things better than you found them? Why shouldn’t that wisdom apply to tech as well? Or when your mother told you it’s not what you say, but how you say it?

The mantra ‘Move fast and break things’ has horrible implications. Why not focus on fixing things, a far more constructive act? Breaking for breaking’s sake doesn’t serve anyone, especially if we’re never coming back to build something better.

Tech needs better philosophy

So many of tech’s problems seem to come down to matters of philosophy, in that the tech industry doesn’t properly value people beyond their potential to become customers who buy tech’s ‘solutions’ that may or may not actually solve a problem2.

It’s easy for tech to adopt the philosophy of moving fast and breaking things when the results will benefit them. The tech industry is like a toddler who runs around smashing vases and busting windows, with a parent trailing close behind to clean up and apologize for the mess. Who wouldn’t love to operate in such a fashion?

AI in particular could benefit from adopting the simple philosophy below:

Helping humans > replacing humans

When we talk about creating or improving company cultures, many of us will utter the phrase ‘It starts at the top,’ meaning it starts with the people in charge. But I’d argue that truly great companies go one step further and start with a company’s ideals, which have the potential to stick around longer than any human employee can. And everyone who joins that company should be expected to adopt those ideals, because the ideals themselves, not who’s in charge, are the focus.

Tech needs better philosophy. Stoicism is great, but it’s not enough.


  1. The New York Times is suing OpenAI and Microsoft for copyright infringement on The Verge ↩︎

  2. Is AI just a solution looking for a problem? on jakelacaze.com ↩︎

Is generative AI codifying average?

There’s no such thing as the average person, or so the wisdom goes.

The logic says that if you were to create a profile of the average man or woman through a variety of factors—height, weight, income, tolerance for Taylor Swift, etc.—you wouldn’t be able to find a real, live version of that person. (So, the next time your friend says they just wanna be average, let them know that they’re chasing one of the least attainable goals of all time.)
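
The math behind that claim is simple: the share of people who land near the middle on every trait shrinks multiplicatively with each trait you add. Here’s a toy simulation in Python—the traits, thresholds, and counts are all invented for illustration, not from any study:

```python
import random

# Toy model: each of 10 traits is a random value between 0 and 1, and a
# person counts as 'average' only if EVERY trait lands in the middle 30%
# of its range. Traits and thresholds are invented for illustration.
random.seed(42)

NUM_PEOPLE = 100_000
NUM_TRAITS = 10  # height, weight, income, tolerance for Taylor Swift...

def is_average(person):
    return all(0.35 <= trait <= 0.65 for trait in person)

people = ([random.random() for _ in range(NUM_TRAITS)] for _ in range(NUM_PEOPLE))
hits = sum(is_average(p) for p in people)

# Expected rate is 0.3 ** 10, i.e. roughly 6 people per million.
print(f"{hits} of {NUM_PEOPLE:,} simulated people are average on all traits")
```

Run it and you’ll almost certainly get zero. Being middle-of-the-road on one trait is common; being middle-of-the-road on all of them at once is vanishingly rare.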

If the average human ideal doesn’t exist in flesh, might it exist digitally? This idea has stuck with me since I heard Dennis Yi Tenen make the following point about generative AI—more specifically, large language models (LLMs)—on episode 265 of Douglas Rushkoff’s Team Human podcast1:

In a way, you’re having a conversation with an average . . . Imagine having a conversation with a thousand—or a hundred thousand—people, and I’m going to kind of average out the answer.

AI is math. A lot of math done really fast. But it’s math. While LLMs appear capable of thinking, they’re in fact just guessing with math. When answering a prompt, an LLM predicts the most probable next words based on patterns in its training data.
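
To make ‘guessing with math’ concrete, here’s a minimal sketch of next-word prediction. The vocabulary and probabilities below are invented for illustration—a real LLM derives them from billions of learned parameters—but the final step is the same kind of weighted dice roll:

```python
import random

# Invented toy distribution: given a prompt, the 'model' assigns each
# candidate next word a probability, then samples from those weights.
next_word_probs = {
    "the answer is": {"average": 0.6, "fine": 0.3, "brilliant": 0.1},
}

def predict_next_word(prompt):
    probs = next_word_probs[prompt]
    words, weights = list(probs), list(probs.values())
    # Weighted sampling: no reasoning, just probability.
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("the answer is"))  # usually 'average'
```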

So it appears that the developers of generative AI and LLMs have made average more accessible, more affordable, and more quickly available. And companies investing heavily in incorporating this technology into their everyday business may very well be investing a lot of time and money, and taking a lot of risk, for average.

Average is not smart. Average doesn’t stand out. So, average is bad business. Might that same money be better spent on something that makes the business special and more competitive?

With the help of LLMs, we’re one step closer to codifying average. In a matter of seconds after prompting, we can see what the average answer looks like for anything we’re curious about. If you need help just getting by, then average may be fine. But innovation and insight don’t emerge from average. Any Seth Godin2 fan knows that average is death for a business. Average means you can be easily swapped for another business.

Maybe average is fine for certain tasks people are using LLMs for. But businesses should make sure generative AI helps them add real value elsewhere. Otherwise, when all these businesses use the same generative AI from the same small handful of vendors, they’ll most likely sound like every other business in their niche.

With the help of generative AI, it’s becoming easier to bring average to the masses. And if that’s all the AI community is doing, then how long until the bubble bursts and the industry falls back down to a healthier average in terms of valuation?

Jake LaCaze is embarrassed to admit he's a middle-aged man who finds himself bouncing to Olivia Rodrigo tunes. 'vampire' is a banger, as the kids these days say. But it's also quite human.


  1. Team Human ep. 265: Dennis Yi Tenen ↩︎

  2. Seth Godin’s blog ↩︎

Concerns for businesses using LLMs

Integrating LLMs into your business may not be a quick fix.


Large language models (LLMs) seem to be expensive, energy-hogging toys at this point. Some companies—most notably Microsoft—think integrating LLMs like ChatGPT into everyday business is a great idea. But I’m not so sure.

Below are some concerns I have for businesses going all in on LLMs.

Hallucinations

It’s well known that LLMs make stuff up (AKA they hallucinate).

What’s the root of these hallucinations? Will an LLM hallucinate with your business’s proprietary data? Does the amount of data processed by the LLM affect its likelihood to hallucinate? If so, what is that threshold? How much time do you expect employees to spend validating the LLM’s claims? Is that cheaper than having a human do the work in the first place? Who, outside of AI developers, wants to babysit an LLM all day?

LLMs are terrible at math

Most business reports are math heavy. People use Microsoft Excel almost exclusively for calculations. But LLMs struggle with basic math. (I recently shared a simple example on LinkedIn1.)

How can anyone trust an LLM to create crucial reports that may heavily rely on math? How can we know that the LLM understands these numbers?
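
For contrast, here’s what reliable math looks like: a few lines of ordinary code that return the same exact answer on every run. This is a hedged sketch of a common workaround—routing arithmetic out of the LLM to a real calculator—not something the LinkedIn example above proposes:

```python
import ast
import operator

# Safe evaluator for basic arithmetic: parse the expression into a
# syntax tree and allow only numbers and the four basic operators.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expression):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("only basic arithmetic is allowed")
    return walk(ast.parse(expression, mode="eval"))

print(calculate("1234 * 5678"))  # 7006652 -- deterministic, every time
```

Arithmetic is trivial and deterministic for ordinary code, which is exactly what a probabilistic text generator can’t promise.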

How can you train an LLM to your company’s style?

LLMs are kind of like supercharged search engines. You put in a prompt (kind of like a search term) and you get a well-written answer. But what you get isn’t a perfect fit, even if it’s 100% accurate.

LLMs tend to be verbose and give far more information than needed (which also makes their claims harder to validate).

Every industry has its jargon, and individual companies may even have unique jargon.

How do these non-AI companies train LLMs for their needs and wants? How expensive is this training? How much time will it take?

Jake LaCaze doesn't hate the idea of using AI where it works and is appropriate. But a career in oil and gas with a brief stint in marketing has made him wary of any hype.


  1. An example of Google Bard struggling with math (LinkedIn) ↩︎

Can the internet ever be fun again?

The internet isn’t fun anymore. That’s the claim made in a recent New Yorker article1. A claim with which I agree.

So why isn’t the internet fun anymore? Let’s answer that by first looking at why the internet was fun in the first place.

In the early days, people were on the internet because they wanted to be. These early adopters were curious and adventurous, at least in a digital sense, so they experimented to see what the internet was, what it could be, and how they could help shape it. No one yet knew what would work. For better and for worse, there were no best practices. So people took chances and made strange sites that appealed to certain niches, thereby creating digital communities. And if you stuck around, you accepted that not everyone you knew would be on the World Wide Web. More than that, you embraced this fact. The uncertainty that accompanied not knowing what you’d find was a feature, not a bug.

Compare those early days to the current state of the internet. People aren’t on the internet because they want to be, but because they feel they need to be. For many of us, an internet presence is self-promotional. We put the time in because we hope to get something tangible in return–something that shows our time ‘invested’ was worth it. (For the record, this applies at times to your author. Otherwise, I wouldn’t still have a LinkedIn profile.)

On today’s internet, most people don’t want to spend time in unfamiliar places with unfamiliar faces. Most of us don’t want to start over with new social networks and new communities, in part because pursuing something new means taking time away from something you’ve built elsewhere. A couple decades ago, trying something new on the internet was a great example of having nothing to lose and everything to gain. Now, for many of us, the opposite is true.

Also, digital communities are harder to come by. Twitter/X, Facebook, Instagram, TikTok, YouTube–these aren’t communities. They’re instead mega aggregators. People don’t use these services because of a shared interest. They’re on these services simply because they are. Because they’re online. And now, thanks to the abundance of broadband Wi-Fi and smart phones, simply being online isn’t the gatekeeper it once was.

Is it any wonder no one seems happy when the internet feels like a ubiquitous obligation? We’re no longer online to have fun. Instead, we’re like Marshawn Lynch–‘I’m just here so I won’t get fined’2.

But Lynch, an NFL running back, was contractually obligated to attend interviews. So he made light of the obligation wherever he could. Most of us don’t have to be online. But we feel as if we must be, so we show up without contributing to the community.

And then there are the issues of the look and feel of the internet.

Screenshot from Bluesky describing the state of the internet in 2023
The state of the internet in 2023 according to Kyle Marquis on Bluesky - Link to original post

The internet is now highly centralized, dominated by four or five major players. Any new platform that gains attention risks being acquired by one of the majors and maybe abandoned or shut down. And most sites not owned by the big players are plastered in ads and popups and autoplay videos, making the content you came for inaccessible, particularly on mobile.

Screenshot of a recipe website with a video on top of a signup popup
A video on top of a signup popup on a mobile site—This is the hell Kyle Marquis was warning us about.

The modern internet has been optimized–not for users, but for corporations. And as the great poet Cyndi Lauper warned us four decades ago, money changes everything.3

Anything that gains a major following online and sticks around will most likely be monetized at some point, as it deserves to be. Maintaining these sites and services isn’t free. So, is this just the fate of the internet for the most part? Was the era of fun a brief window in the late ’90s and early 2000s? Is it gone forever?

Jake LaCaze wants the internet to be fun again.


  1. Why the Internet Isn’t Fun Anymore ↩︎

  2. Marshawn Lynch: ‘I’m just here so I won’t get fined’ (YouTube) ↩︎

  3. ‘Money Changes Everything’ by Cyndi Lauper (YouTube) ↩︎

Is it time to let the Twitter dream die?

Nietzsche shocked the world when he declared God is dead. (Kids in the Hall, not so much1)

Now, digital philosophers hope to do the same when they declare the death of Twitter.

On one hand, Twitter will live on through X, whatever the hell that becomes. On the other hand, the essence of Twitter was gone long before Elon Musk bought the platform.

So what’s next? Most people are trying to answer this question by finding a comparable replacement. How can we fill that bird-shaped void in our souls? Mastodon is too confusing for normies. Bluesky isn’t open to the public and is still available only via an invite code. People seem to be over Threads, as usage has recently dropped over 80%2.

Why do we need a one-to-one trade? Why do we need to replace one platform with another? What if we instead replace Twitter with something else completely?

Cal Newport recently quoted the author Neil Gaiman as admitting his own blogging, an activity he once enjoyed, had suffered due to microblogging via Twitter3. Gaiman doesn’t think any current platform will replace Twitter. If he’s right, then that means something unlike Twitter must replace it. What will that something be?

So many of us keep waiting for something to recreate the early vibes of Twitter. But what if Twitter was little more than a moment on the internet? What if that moment is simply gone, lightning that won’t strike twice no matter the platform?

The essence of Twitter was killed by the pressures of profit. Can any platform operate at the same scale while resisting those same pressures? Someone has to pay for these services, one way or another. Servers and development ain’t free.

Maybe Twitter should serve as a warning of what likely lies ahead for platforms like it. And maybe we shouldn’t seek to replace Twitter but to find better, more niche alternatives.

Maybe Small is the New Big isn’t just the title of a marketing book by Seth Godin4. Maybe it’s also the future of the web.

Jake LaCaze knows it's time to let the Twitter dream die. Yet he's still on Bluesky.


  1. ‘God is Dead’ skit from Kids in the Hall (YouTube) ↩︎

  2. Threads Has Lost More Than 80% of Its Daily Active Users by Gizmodo ↩︎

  3. Neil Gaiman’s Radical Vision for the Future of the Internet by Cal Newport ↩︎

  4. Small is the New Big by Seth Godin (Amazon) ↩︎

At what point does AI rob us of our style?

As we rely more on AI, aren’t we at risk of sounding just like everyone else?


The prophets of AI continue to promise their favored tech will make our lives easier. Thanks to AI, more of the things we want are only a click or a prompt away. You can now inject AI wherever you want, as AI can help you with writing, creating music, and editing images, to name just a few examples.

I don’t fault anyone for using these offerings, especially because I have used them in my own way and will continue to do so at different points in my life. But I still have concerns. At what point does technology rob us of originality–and when is that scenario okay, and when is it not?

Should we be concerned about the loss of originality in terms of cold hard facts? We likely don’t care about originality when it comes to complex math–think balance sheets and revenue forecasting. In those cases, the work to get those numbers isn’t the point–deciding what to do with the information is the point.

But what about fields we’ve traditionally considered more artistic? Fields like writing, music, and graphic design. In such fields, there’s not much separation between the process and the end result, so the process is a larger part of the point. What you choose to include or exclude may be subjective. These decisions are part of your style, one of the more crucial aspects of art. Number crunching doesn’t leave much room for style. But the arts are all about style.

As we remove ourselves from the creative process and forfeit agency to AI and algorithms, at what point are we enabling the erosion of style?

The prophets of AI will say that AI tools can unlock creativity previously unrealized. Maybe that’s true for a small segment of people. Call me cynical, but I imagine few will put in the time to learn how to improve results from prompts. Most will put in minimum effort and take whatever AI gives them, leading to an ever-more homogeneous internet. The future is more likely to be less original. The tools meant to empower us will instead make us all the same.

People probably don’t expect numbers to have personality and quirks. But we expect these personal touches from artistic projects.

Art goes beyond having the right answer. Art is also about the habits of the artist–AKA style. Style is the artist’s most cherished asset. Style is what makes the audience relate to the art and the artist.

As we further integrate AI into art, are we at risk of losing those stamps of authenticity we unknowingly put in our work? Those little hints that remind our audiences that we’re the authors of our own works? And if style is the most valuable thing we have, are we smart to risk losing it?

Jake LaCaze has been using images generated by DALL-E 2 as cover images on this blog as a joke, but he thinks this one is actually damn good.

Is AI just a solution looking for a problem?

A quick video in which I question the approach of the prophets of AI, and what it means for us


Back in June, I recorded this quick video I posted on LinkedIn, in which I asked if AI developers are putting the cart before the horse.

So now I want to share that same video with you.


Thanks for watching.

Or, if you prefer to read–no worries, just check out the transcript below.

Transcript

(edited for clarity)

Is AI the ultimate example of a solution looking for a problem? Or, to use another analogy: Is AI the ultimate hammer to which everything appears a nail?

When you solve most problems, you usually start with the problem itself. You identify what’s wrong and you have an idea of how you want it to be better. You then work your way through the problem and escalate as needed.

In so many situations with AI, it seems like we’re going backwards, as if we’re saying, Here’s a powerful tool–what are some major problems it can solve?

It seems we’re having these great advancements in AI, but we’re not adopting or using the technology as quickly as the developers would like. It kinda feels like they’re forcing it, like they’re trying to squeeze it in wherever they can. In so many situations, they’re identifying real problems–and technology can likely help–but I’m not sure AI is needed in all these situations.

So I’m worried that we’re going too extreme.

I’m not afraid that AI is capable of replacing humans. I’m afraid that it’s incapable of replacing humans but that certain people will try to make it replace us anyway.

Content quality over content source

Either a work is inspiring or insightful, or it’s not. Stop qualifying the work by saying it was created by an LLM or another form of generative AI.


I recently made a tongue-in-cheek post on LinkedIn, directed as a jab at how some people give large language models (LLMs) too much credit simply because they’re machines.

Screenshot of my stupid post on LinkedIn criticizing LLMs

This silly post got me thinking about content quality vs. content source.

If you disagree with the point of my post, that’s fine. You’re free to criticize it, poke holes in it, and tear it apart. I ask only that you do the same if this post were created by an LLM like ChatGPT. Please don’t be one of those people who would think the post insightful if it were written by a machine trained for countless hours on terabytes and terabytes of data. In this situation, the result is far more important than the process.

LLMs and other generative AI must be held to higher standards. We must stop pretending these models are smart just because they use so much data. Data alone is useless without critical thinking and insight. If the models and their algorithms are flawed, there’s only so much the models can do with more data.

My own model, JakeGPT, is trained on nearly 40 years of experience as a real-world human being, including a marketing degree and 15 months in tech marketing. JakeGPT may not have been trained on the largest dataset, but at some point, data is no longer the limiting factor–so more data is not the answer.

Until AI can replace humans everywhere, it will be necessary to relate to humans to influence them. Data and facts and figures–the strengths of AI–can go only so far. Humans still respond to story, and personal stories are more effective than the generalizations that LLMs churn out.

Personal story and insights are the strengths of JakeGPT. Sure, the model is flawed and unintentionally biased in its own ways. But so are models like ChatGPT. And JakeGPT needs less data, less training, and less electricity. And perhaps best of all, JakeGPT is less likely to empower bad actors looking to deceive or harm others. (But if JakeGPT does ever go rogue, it can’t be used for nefarious purposes at the same scale as other models.)

And the cherry on top: JakeGPT plays for Team Human.1


  1. Team Human Podcast ↩︎

There is no invisible hand of technology

Technology doesn’t progress on its own simply because we expect it to.


On a recent episode of Andrew Yang’s Forward podcast1, Walter Isaacson shared an anecdote he picked up while shadowing Elon Musk for the entrepreneur’s recently released eponymous biography2. In this anecdote, Musk made the point that people take for granted that technology progresses on its own, as if it’s an unwritten law of the universe. As if things just move forward with time.

I haven’t been able to get Musk’s point out of my head after first hearing it. Why? What’s so significant about it? What does it really mean?

To me, it means humans have agency in shaping their future. More importantly, humans have a responsibility in shaping that future.

Too many of us accept that things will just work out. Or that they won’t. Whatever our outlook, we get complacent. We take whatever we can get. We accept the future and consequences we’re dealt. We blame the dealer even though we never acted on our chance to cut the deck.

Technology is not some mysterious force. There is no invisible hand of technology moving it in one direction or the other. Technology is the byproduct of our creations and the norms we create around using those creations.

At the time of this writing, AI is all the rage. I don’t think AI is worthy of being injected into every aspect of our lives, but unfortunately, that doesn’t mean it won’t be injected into every area and situation possible. This strategy is unforgivably reckless, because, as John Oliver said in his bit about AI on Last Week Tonight3:

The problem with AI isn’t that it’s smart–it’s that it’s stupid in ways we can’t always predict.

On his Substack, Gary Marcus recently echoed this sentiment when he pointed out how DALL-E 3, the latest version of OpenAI’s image generator4, had problems showing black doctors with white patients, or a watch showing 1 o’clock5. We still don’t know the ways in which AI has unintentional bias. And we do a disservice by presenting AI as limitlessly intelligent, when in fact its capabilities very much depend on how it’s trained and what it’s trained on.

At the risk of becoming a broken record, I’ll say it again: AI has potential. We should explore where and how it can help humanity. But we must do so in a responsible manner. We must be thoughtful and deliberate. Right now, we’re being anything but.

Technology doesn’t simply advance just because. It moves along the path we create for it. And I hope more people will start chiming in on which path we set this risky technology on. Just because there are risks involved doesn’t mean there aren’t great benefits waiting. But again, those benefits won’t happen on their own. We must play our part in making those benefits reality.


  1. Walter Isaacson on Elon, X, and breaking the rules ↩︎

  2. Elon Musk by Walter Isaacson ↩︎

  3. Last Week Tonight on AI ↩︎

  4. DALL-E 3 ↩︎

  5. Race, statistics, and the persistent cognitive limitations of DALL-E ↩︎

Be here now

Can we be mindful in the 21st century?


Introverts make up at least one-third of the population—maybe as high as one-half—yet in so many ways the world feels as if it’s made only for extroverts. How can it be that our social systems benefit one type of person while alienating the other?1

Pop culture often presents the introvert as being inadequate and odd, a type of person to be fixed or merely tolerated when possible. Introverts are often described as antisocial, but it would be more accurate to say introverts have a different threshold for social interaction. I, as an introvert, recognized this distinction during the lockdown phase of the COVID-19 pandemic. Before the pandemic, I thought I’d be fine in isolation. But forced social distancing revealed that I craved interaction. Interaction itself wasn’t the issue—the quality and frequency of interaction were the real questions.

And those questions of quality and frequency have led me to question online interactions, mostly via social media. This extroverted world expects us to be everywhere online at all times. Digital tools are available to help us scale, to be present in many places at once. But operating this way raises those same problems of quality and frequency of conversation. We’re told to keep the conversation going so that the algorithms favor us and push our content to more viewers in the name of promoting the conversation—conversation we may not even want to be part of.

In her book Quiet: The Power of Introverts in a World That Can’t Stop Talking2, Susan Cain argues that introverts need seclusion to recover after interactions. Introverts tend to perceive more than extroverts, Cain says, so introverts have more to sort through after social situations. This always ‘ON’ world of social media means introverts have an endless wave of interactions to process, all of this in addition to their offline interactions.

But what about the physical world, the one we live in without the need for screens? Where’s the concern about making sure we’re present there, able to process all that’s happening around us? How can anyone hope to process anything when new events are constantly dinging for our attention? We’re connected to the world at all times. But why? Do we want to be? Should we want to be?

Questions like these are the ones that have been floating in my head since I started reading Present Shock: When Everything Happens Now3 by Douglas Rushkoff. What exactly does ‘present’ mean? What is the true present moment? The tangible present, or the virtual present?

Are these questions as concerning for extroverts? Or do they feel the more presence, the better? If you see an issue, you must then consider the costs, both of being so present and of not being present. You must find your own balance and determine when and where you want to be present—or have the energy to be. Some will try to convince you that you must be everywhere. But if you’re everywhere, are you really ever anywhere? This is the same question I ask of those hoping to scale their presence with the help of AI. Doesn’t being everywhere in such fashion cheapen the worth of your time and presence? Isn’t your scarce availability the true value of your presence? Is there any added value in our truly being present? Will anyone know the difference?

The promise of ‘community’ is supposed to be part of the appeal of social media and the modern web. But so many digital platforms seek to be a one-size-fits-all solution for the masses. ‘Community’ and ‘masses’ are often conflicting terms. How often can we have community if we invite the masses? Being noticed on these platforms often requires appealing to the masses while ignoring your potential true audience, meaning the masses then distract from the true community.

Our presence is more than a simple commodity. Or is it?

By embracing digital extroversion, not only are we giving away our attention and our presence—we’re also giving away data, which recently may have been used to train generative AI models we now fear may take our jobs4.

Living as an extrovert introduces noise, both literal and figurative, into your life, which is fine if you’re up for it. But the extroverted web doesn’t want you to slip away to recover and rejoin when you’re ready. The extroverted web says you’re missing out on the endless firehose of content that will be outdated and irrelevant by the time you learn about it. You’re also missing out on exposure, as the most crucial part of the online growth formula seems to be consistency, meaning you must constantly churn out content so that your audience doesn’t forget you.

These days, there’s far too much content to stay current on. And what kind of audience do you have—and what’s your relationship with them—if the volume of your output is exponentially more valuable than the quality of your output?

The tech giants have built their platforms on our content. They’ve simply given us a place to connect, but we do the hard work of creating content that keeps eyeballs on the page or on the app. No wonder the tech giants love the extroverted model.

Eventually, digital extroversion turns into neediness, in the form of the need to be liked and accepted, to increase the chances of being watched. A need to be interesting without offending, for fear of having your content demonetized or shadowbanned. This neediness risks becoming a need to fit in, to be like everyone else online—an NPC5 in a vast sea of unimaginative homogenization where imitation is often the safest path to success.

For some, this formula may not be a problem, especially if the main goal is to be popular. But the rest of us likely find ourselves sucked into this way of thinking because it’s so prevalent—we don’t even realize we’re influenced by it. So we end up chasing a goal we may not even want. We no longer create only for the sake of it. We create for likes and views and follows, making the internet far less interesting and dynamic.

Offline, introverts often get out of their shells to put themselves out there and meet the world halfway. But to have meaningful sustained discourse with introverts often requires pursuing them to some degree. Maybe this is how it should be online as well. Offline, we have the option to go to so many parties that we never attend. Maybe this is how it should be online too. We can try the parties every once in a while when we feel up to it, but the rest of the time you can find us at our websites and email addresses. Reaching us may take a little effort on your end, but hopefully it’s worth it.


  1. So Begins A Quiet Revolution Of The 50 Percent on Forbes ↩︎

  2. Quiet: The Power of Introverts in a World that Can't Stop Talking by Susan Cain on Bookshop.org (Affiliate link) ↩︎

  3. Present Shock: When Everything Happens Now by Douglas Rushkoff on Bookshop.org (Affiliate link) ↩︎

  4. X may train AI with its users' posts. Are other social media sites doing the same? on ZDNet ↩︎

  5. What Does It Mean To Be Called An NPC? The Gen Z Insult and Slang Term Explained on Know Your Meme ↩︎

Calculating the costs of convenience

What does convenience cost us in the long run?


Listen on Anchor

As long as there are people, there will be questions about the human condition. How are people doing? What are their greatest struggles and fears and joys? And what does it mean to be a real human being1 at any point in time?

Odds are good I won’t make it out of the 21st century alive. So considering what it means to be human in the 21st century seems a good place to put my energy for the next version of my blog.

In so many ways, life has never been better for those of us in the first world. We’ve spent most of our lives in unprecedented safety and convenience, thanks at least in part to technology. But, strange as it may sound, might that same convenience bring about our greatest challenges? The world is at our fingertips thanks to smart phones and other mobile devices. But having these remedies to boredom always at arm’s length makes it hard to be present in the real world beyond the screens.

We must also fight the temptation to live always in a digital world when tech titans keep telling us what a great option it is. And don’t forget that these same titans have engineered their products and services to be addictive, to keep us coming back. As they keep us addicted, they’re harvesting our data and doing who knows what with it. We now know they’re using that same data to train AI with the hope of replacing us. Many businesses will adopt these AI ‘solutions’ haphazardly, putting humans at risk in the name of efficiency.

I grew up enamored with technology, believing it could make life better. I still believe it can, with some caveats. Inserting technology into a process or situation doesn’t guarantee success. And it’s hard to do right. Drumming up hype and excitement is easy, but those same elements make it hard to know whether the technology is actually useful, or useful to the extent promised.

We’ve reached a point where we need to put this convenience into perspective. Has it really been all that great? Has it served us as users? Even if everything was great in the past, that doesn’t mean we have to go along for the ride in the future. Maybe some people feel the past cost of convenience was fine, but they’re not so sure about the costs of what’s ahead. Or maybe we don’t have a choice. Maybe certain things are set in motion, meaning the future is already determined and there’s nothing we can do about it. To be clear, I don’t hold this view, but if you do, I hope you’ll agree we should consider what lies ahead so we can best prepare ourselves.

I don’t pretend to have a crystal ball that shows the future. While I ponder our future, I’m no futurist. I prefer to discuss how things can go rather than how they will go. So while I don’t have predictions of the future, I do have concerns and hopes. I’m concerned that certain advances in technology seek to make humans irrelevant2. But I hope we can find our way to maintain–and improve–our humanity.

What does it mean to be human in the 21st century? What are our best parts that should be amplified? And which parts should be improved upon? These are fair questions, and there are so many similar questions deserving consideration. So I hope you’ll join me as I do my best to do them justice.

You can follow along via RSS or email. And feel free to drop me a line from time to time. I like communication with a human touch.


  1. Video: ‘A Real Hero’ by Electric Youth ↩︎

  2. The Art World v. The Tech Bros: A Story of Arrogance, Hubris & Lies by uckiood ↩︎

Social media engagement algorithms and the illusion of choice

So many of us, over the last couple years, have been rethinking our relationship with social media and the internet at large.

My own wonderings about technology have seen me dabble in using only open source operating systems and software. But I’ve recently realized that while I appreciate open source and like the idea of all technology being open source, I am not an open source purist or absolutist. Like so many digital citizens, I have concerns about privacy and security. But in these areas, again, I am not a purist.

If I feel this way about technology at large, it only makes sense that I have similar concerns about social media. And I imagine I’m not alone.

So if I’m correct, then it makes sense to ask:

What’s the problem with social media?

Social media used to be a way to stay in touch with friends and family and random weirdos you found in various corners of the internet. But now it feels like this thing we do out of habit, even though it drives us crazy.

The problem with social media overwhelmingly seems to be the engagement algorithms: the efforts to keep us coming back for more to boost ad revenue, even if that increased engagement results in angrier users.

What’s so bad about social media engagement algorithms?

Angry users—a side effect of these engagement algorithms—are definitely a problem. But I think there’s another factor that doesn’t get enough attention: These engagement algorithms lead to an illusion of choice.

Users put effort into finding and following relevant and interesting voices, only to have another factor—the abstract, opaque engagement algorithms—determine their experience on a social network.

When we don’t know how these algorithms work—or when they’re working—how can we be confident in our ability to build or curate our own digital experience? Where is the line between being responsible for our own experience and being manipulated by mysterious forces we’re ill-equipped to fight?

As Cal Newport has pointed out, social networks like Facebook, Twitter, and Instagram built their services through network effects. People joined these sites to connect with others they knew or were interested in. And the fact everyone they knew was already on these established sites was enough to keep them there. Leaving behind your connections and starting over was too costly.

But these social media giants gave up the advantage of network effects when they started using engagement algorithms. They threw away the main reason people used their services. And in so many ways, the move from network effects to engagement algorithms felt like a bait and switch.
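
Here’s a hedged sketch of that bait and switch in Python—the field names and the engagement score are invented for illustration, since no platform publishes its actual ranking formula. The same posts, two different sort keys:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # the platform's guess at clicks and outrage

def chronological_feed(posts, following):
    # Network-effect era: you see the people you follow, newest first.
    return sorted((p for p in posts if p.author in following),
                  key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    # Engagement era: your follow list barely matters; the score decides.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [Post("friend", 1_700_000_300, 0.10),
         Post("ragebait_stranger", 1_700_000_100, 0.95)]

print([p.author for p in chronological_feed(posts, {"friend"})])  # ['friend']
print([p.author for p in engagement_feed(posts)])  # stranger first
```

Same data, different sort key, completely different experience. That’s the illusion of choice in two functions.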

Why aren’t engagement algorithms on TikTok a problem?

While concerns about security and privacy on TikTok appear valid, this post will ignore, but not discount, those concerns to stay on one point.

While users may have concerns about the types of content the TikTok algorithms serve, most users are not bothered by the presence of algorithms themselves on the service. Unlike the case with Facebook, Twitter, and Instagram, engagement algorithms are part of the appeal of TikTok. On TikTok, the algorithms are a feature, not a bug.

TikTok can get by with using algorithms because it doesn’t give users the illusion of control. TikTok doesn’t pretend to deliver content based on whom you follow. It’s common knowledge that TikTok’s engagement algorithms, aided by data from your views, likes, comments, and shares, decide what you see in your main feed. You have to choose the Following feed in the hope of seeing content from those you follow. So while keeping up with those you follow is an option, it is not the default. TikTok is made for finding engaging content, not for keeping up with those you already know. TikTok technically gives you the option to follow individuals, but it doesn’t put much effort into that angle.

While TikTok may be worthy of criticism in some areas, the service deserves credit in terms of algorithms. While parent company ByteDance may not be transparent about how TikTok’s engagement algorithms work, it has at least been transparent about the fact that TikTok operates through engagement algorithms.

Users are aware of the presence of algorithms when they sign up for TikTok. They know what they’re getting into. And they’re mostly fine with that because they’re not signing up to keep in touch with friends and family as they did on other sites.

Can social networks have any value once engagement algorithms are present?

The network effect seems to persist only as long as social media services steer clear of engagement algorithms. Two such examples are Mastodon and micro.blog.

Networks on LinkedIn once had value because the connections were likely to be genuine, in that you either knew the person you connected to, or you had an interest in that person. But now, many connections are made only with the intent of increasing who sees a member’s content, AKA engagement.

And once users figure out what gets engagement, it’s only natural that many start creating content in the tried-and-true formula. So users see the same types of content over and over. Originality exists on these platforms. But it goes unseen, unrecognized, unappreciated. And so mainstream social media becomes the digital suburbs, full of cookie-cutter houses lined with the same bushes and political signs.

What’s the answer to social media engagement algorithms?

While web3 promises to solve all our digital woes, I find the solution to be a simple and old technology: RSS.

Watch the video below if you’re unfamiliar with the wonders of RSS:

🗒️ Note: This video is 15 years old, so it doesn’t address that Google Reader is now dead. At only $15 a year, Miniflux is a great alternative. Or check out Reeder 5 or NetNewsWire if you’re on Mac/iOS/iPadOS.

How does RSS fix the illusion of control?

RSS is the obvious choice for one simple reason: It doesn’t give the illusion of control—it instead gives actual control.

With RSS, you decide what to subscribe to. You decide what’s important to you. You decide what you pay attention to.

You are once again responsible for your online experience.

Sure, curating your experience takes a little more effort than relying on social media engagement algorithms. But it’s worth it.
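
For the curious, here’s roughly what that control looks like in code. This sketch uses the third-party feedparser library, and the feed URLs are placeholders to swap for whatever you actually want to follow:

```python
import feedparser  # pip install feedparser

# Your subscriptions live in a plain list you control. Nothing reorders
# or hides entries to juice engagement. (Placeholder URLs below.)
subscriptions = [
    "https://example.com/blog/index.xml",
    "https://example.org/feed.rss",
]

for url in subscriptions:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:  # five newest entries per feed
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))
```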

The simple math of web3

What if Web3 isn’t an evolution but a move to something like the web’s original form? Not an arrival but a return. A regression of sorts. A devolution in the most positive context.

These are some of the questions I’ve been asking about Web3 over the last few months. And these questions hint at my hopes—but not my predictions—for the future of the web after the apparent falls of Twitter and Meta.

What would a return to the early days of Web1 look like?

Noah Smith recently wrote what I had previously only spoken:

When I first got access to the internet as a kid, the very first thing I did was to find people who liked the same things I liked — science fiction novels and TV shows, Dungeons and Dragons, and so on. In the early days, that was what you did when you got online — you found your people, whether on Usenet or IRC or Web forums or MUSHes and MUDs. Real life was where you had to interact with a bunch of people who rubbed you the wrong way — the coworker who didn’t like your politics, the parents who nagged you to get a real job, the popular kids with their fancy cars. The internet was where you could just go be a dork with other dorks, whether you were an anime fan or a libertarian gun nut or a lonely Christian 40-something or a gay kid who was still in the closet. Community was the escape hatch.

Smith’s recollection of the early days of the web sums up what many of us are aiming for: A return to a special kind of community, rather than another opportunity to interact with those we already get enough of offline.

The technology of Web2 will not disappear. But how we use such technology may—and should—change.

And so Web3 may not be the next release number but instead a matter of simple math:

Web3 = Web1 philosophy + Web2 technology

What exactly is the philosophy of Web1?

The philosophy of Web1 is basically the promise of Web3: Decentralization. Fragmentation. An internet that’s harder to silo into a few sites and services.

write.as founder Matt Baer has often criticized Web3 and made the point that decentralization is already possible through technology we take for granted or may have long forgotten, such as email and RSS.
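Baer’s point is easy to demonstrate. Feed subscriptions aren’t locked inside any one service; they live in OPML, a plain XML format that virtually every reader (Miniflux, NetNewsWire, and the rest) can import and export. Here’s a small sketch using only Python’s standard library; the blog titles and URLs are placeholders.

```python
# Sketch: exporting a hand-picked feed list as OPML, the de facto
# interchange format for RSS subscriptions. Because the file is plain
# XML, you can move it between readers at will -- no platform lock-in.
# The titles and URLs below are placeholders.
from xml.etree import ElementTree as ET

def export_opml(subscriptions, path="subscriptions.opml"):
    """Write (title, feed_url) pairs to an OPML file."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "My feeds"
    body = ET.SubElement(opml, "body")
    for title, url in subscriptions:
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)
    ET.ElementTree(opml).write(path, encoding="utf-8", xml_declaration=True)

export_opml([
    ("Example Blog", "https://example.com/feed.xml"),
    ("Another Blog", "https://example.org/rss"),
])
```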

Molly White, creator of Web3 is Going Just Great, has often branded Web3 and its related technologies as solutions in search of a problem. She’s also made the point that technology on its own rarely solves problems. Change is often aided by other forces such as regulation.

In this case, the change we need must be aided by philosophy, which will then change how people use technology.

This moment in time brings up another point: More technology is not always the answer, especially when we’re not properly using the features we already have. Better philosophy and better usage are the more promising paths to take.

What’s so bad about Web2 anyway?

Web2 has not benefited users nearly as much as it has the few corporations that used their network effects to consolidate power into a handful of services. This model needs to fall. And I hope—but do not predict—that it will soon.

But Web3 of the crypto/blockchain kind is not the answer. If anything, it will only further complicate the digital landscape.

The technology for decentralization already exists. All we have to do is use it the right way, something we haven’t been doing for the last decade or so.

Digital Minimalism and philosophy in tech

I’ve been rethinking my relationship with technology since I started reading Digital Minimalism by Cal Newport.

Mentioning this book is usually the point at which a blogger tells his audience he’s deleted his social media accounts and can now be reached only by smoke signal.

But this is not that kind of post, dear reader.

I appreciate that Digital Minimalism is not a book of prescriptive, one-size-fits-all advice for living with technology in the 21st century. While Newport himself is no fan of social media, he leaves it up to individuals to define their own relationships with the tech they use regularly. Newport’s most important message is that you should think about where technology fits in your life, not that we all reach the same conclusions and use or avoid the same services.

Digital Minimalism is not a how-to guide. It is instead a guide calling its readers to develop their own philosophy about where technology fits into their lives.

After I started reading the book, I deleted from my phone nearly every app not related to calling, texting, or navigation.

Newport suggests suspending use of any problematic apps for a month. He refers to this time as the “decluttering” period. Once it’s over, you reintroduce the temporarily banned tech and observe whether it still has a place in your life. Newport claims people often realize they no longer need the tech, making their declutters permanent.

I made it a week before I reversed my declutter, because I’m lacking in moral fiber. But I have kept most social apps off my phone this go-round.

And though I have deviated from Newport’s recommendations, I have started creating distance between myself and my phone, leaving it behind in other rooms of the house. My short break does seem to have made putting my phone down much easier when I know I need to.

My declutter has made me realize how much I prefer the desktop (or laptop) experience over the mobile experience in most situations. I’m an elder millennial, so I’m better with a traditional keyboard and mouse than I am with an onscreen keyboard.

I recently got a couple of used (or “previously enjoyed”) laptops through my job. The laptops are nowhere near the latest and greatest specs. They can’t be upgraded to Windows 11, but they run Linux just fine (currently Solus).

Still, these laptops are thin and powerful enough for everything I need. Ten or fifteen years ago, it would have been impossible to imagine ever needing anything more, especially when you consider the near ubiquity of public wifi.

But now, in the age of smartphones, we want the conveniences once reserved for laptops available through the devices many of us keep in our pockets at all times.

Perhaps it’s easy to gush over tech like laptops when pitting it against the smartphone. Perhaps the smartphone is a scapegoat, the villainous flavor of the week.

I do not believe eliminating smartphones would fix everything. We would find a substitute for distraction, perhaps in laptops and desktop computers.

But that’s a problem we can address when we’ve improved our relationships with our smartphones. This acknowledgment makes us better prepared in the war for our attention.

While Newport doesn’t say everyone should delete all social media, he doesn’t hide his opinion that social media holds little to no value. These days it’s fashionable to jump on Newport’s side and crap on social media while ignoring any benefits.

The reason I fell in love with the Internet way back in the late ’90s is the same reason I stick around on social media and related platforms in 2022: The potential for connection that’s harder to find offline.

Connections made over the Internet are not a substitute for connections with my family and other people I see offline. While I can get along with almost anyone I meet face to face, there are very few I can nerd out with on anything that truly interests me. Or at least not to the depth I want to go.

Also, I tend to hop from interest to interest. It’s always nice to know I can find others interested in the same things on the Internet, often in the form of social media.

Perhaps I would feel differently if I were part of some sort of establishment I could fall back on.

But I would be disingenuous to gloss over my gripes with social media, which relies on sloppy algorithms to decide which content is worth promoting. (I’m looking at you, Meta, LinkedIn, Twitter . . .)

When I think back to my favorite times in online communities, they were often in communities that hadn’t yet been adopted by the masses. And while that may make me sound like an idealistic hipster who wants to keep his hangouts under the radar so that he can have them all to himself, I find my defense more practical than that.

The simple truth is that, in most cases:

Mass adoption = commoditization.

And once you start catering to everyone, you end up serving no one. At some point, the experiences all run together, as the users and their avatars do. What’s left resembles what you likely find offline, where few experiences stand out above the rest.

Before reading Digital Minimalism, I was becoming convinced that a return to smaller online communities was the best path forward. I still believe that in theory, though I haven’t begun practicing it as well as I should.

I’m not sure of the exact limits of this practice either. Obvious candidates include places like micro.blog and niche Mastodon servers. Maybe even the smaller subreddits. I suppose you can create an insular experience on Twitter if you follow the right people.

While I would like to see a break from the worst of Web2, I’m thinking Web3 is most likely not the answer.

Does that mean Newport’s preference for walking away from social media is the answer?

I’m not ready to jump on that train. But I can’t blame anyone who does.

I hope this essay shines some light on the need for philosophy in all aspects of our lives, even in technology.