
Creating the 4D Resources library with Claude Code


This weekend I used Claude to convert my Notion database of 350+ bookmarks into a resources page on my Framer site (with a Chrome extension to easily add and edit them). There are so many new tips and tools to keep up with, especially with AI changing EVERYTHING, so this is a feed/toolkit of new, favorite, or niche resources. Add it to your own bookmarks: 4Dthinking.studio/resources

(I’d been squirreling links away in Notion for years, but a great design-resources site and newsletter made me realize that a public Notion URL, while efficient, was a missed opportunity.)

It only took a day, and I thought some other people might want to do something similar, so here are the prompts and process I used.


How to build a links database with Claude Code

First, think

I wanted a cool web page to display my links with custom sorting and filtering. I wanted a Chrome extension to add the items, since all the tools I still use regularly have extensions. I wanted tagging as part of the bookmarking, so I didn’t end up with another Inbox of saved links to process. And later in the process I realized I needed an easy way to edit the items or delete dead links.

Think: What are all the steps in your current workflow, and how could they be better?

Then, build

I started in Claude.ai — not a Project or anything, just the regular chat. With all the talk of “prompt engineering,” it’s easy to feel like you don’t have that skill, but YOU CAN JUST ASK CLAUDE. Look at how dumb my prompts are, and things turned out perfectly:

  1. how could i build a custom chrome extension that lets me grab a url, add tags, and save it in a public resources database like omglord.com

  2. ok, can you make the modal have fields for title, blurb, phase, link, date added, and then a checkbox for “fav”? i would like the web interface to be part of my Framer site if that’s possible

  3. how do i load the extension in chrome

  4. can you make Tags optional, and make Phase an autocomplete single select from the following attached options (in the same order):

  5. where do i get the firebase api key

  6. i don’t see Build → Realtime Database

  7. in framer, do i add an Embed component or a Code Block

  8. how do i get these existing resources into my db (csv attached)

Just those 8 prompts above and I had a working demo! A Chrome extension, a Firebase database, and a Framer embed (none of which I had ever created before).
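For anyone attempting the same build, the core of such an extension is surprisingly small. Here’s a rough TypeScript sketch of the save flow; every name in it (buildRecord, saveResource, DB_URL, the exact field list) is my reconstruction, not the actual code Claude generated:

```typescript
// Sketch of the popup's save flow. All names here are assumptions.
type Fields = { title?: string; blurb?: string; phase: string; fav?: boolean };
type Tab = { title?: string; url: string };

// Hypothetical Firebase Realtime Database REST endpoint.
const DB_URL =
  "https://your-project-default-rtdb.firebaseio.com/resources.json";

// Combine the active tab with the popup's form fields into one record,
// matching the fields from prompt 2: title, blurb, phase, link, date, fav.
function buildRecord(tab: Tab, fields: Fields) {
  return {
    title: fields.title || tab.title || tab.url,
    blurb: fields.blurb ?? "",
    phase: fields.phase,
    link: tab.url,
    dateAdded: new Date().toISOString().slice(0, 10), // YYYY-MM-DD
    fav: Boolean(fields.fav),
  };
}

// POST appends the record under /resources with an auto-generated key.
async function saveResource(tab: Tab, fields: Fields): Promise<boolean> {
  const res = await fetch(DB_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRecord(tab, fields)),
  });
  return res.ok;
}
```

Firebase’s REST API does accept a plain POST to a `.json` path like this, but whether unauthenticated writes succeed depends entirely on your database rules.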

The Chrome extension

Then, design

Once the prototype worked, I went into Figma to decide how I wanted it to look.

And then back to Claude for more fixes and questions

  1. for the framer component, could it look like this?

  2. [many rounds of detailed design feedback, e.g. please reduce the space between the phase tags to 1px, and put each row on its own line like in the bulleted list]

  3. oops i forgot to upload the mockup with the hover colors

  4. is 4d-minimal.tsx the new code to cut/paste

  5. what are all the other .tsx files in the /framer-variants folder for?

Finally, tons of UX revisions

  1. if i want to delete a resource later, what’s the easiest way to do that

  2. option 3 would be great, yes. also, for the filter tags below the phase tags, please use this design. Also, my site uses Millionaire for the serif and Acumin for the sans. See the CSS on https://ericaheinz.com/ and please set the framer extension up to use those fonts

  3. [more rounds of bug fixes]

  4. i updated resources.csv locally, how do i update firebase again?

Here I finally switched to Claude Code, which edits the files in a folder directly, because I got tired of downloading the file before copy/pasting into Framer.

  1. i need to update the firebase database with new tags. what’s the easiest way to do that

  2. [many more rounds of bug fixes and design tweaks, trying not to get mad] e.g.

    1. the tags seem to be missing from the data now. i used the csv importer to firebase, are they being seen as part of the URLs in the csv?

    2. A/V is the phase, animation is the tag

    3. what do you mean run it once

    4. copy.tsx still not working. please compare to this version from yesterday that was working

    5. NO, don’t edit copy! that’s the one that was working! update 4d-minimal.tsx

    6. where is the firebase console browser

    7. nothing is pasting, is there any other way to do this?

    8. the fields look like they have the right name, but the type field is missing. can’t i just rename the column titles in the CSV and reimport? why is the type field not being imported

    9. yes change to category

    10. here is the new csv. can i go ahead and import it to firebase?

    11. why do i have to run these stupid scripts? i didn’t have to do that before

    12. i have the csv open in Numbers, i can also just do a find and replace to change values before importing, or reformat the dates. i’m not a developer i’m a designer

    13. you built this importer

    14. it was in a different chat with you

    15. ok it’s working now. whew!
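In hindsight, all those CSV round-trips boil down to a short script. This is a sketch under my own assumptions (the column names, the /resources path, a CSV with no quoted commas), not the importer Claude actually built:

```typescript
// Sketch of a CSV-to-Firebase reimport. Column names and paths are assumed.

// Parse a simple CSV (no quoted commas) into objects keyed by the header row.
function parseCsv(text: string): Record<string, string>[] {
  const [header, ...rows] = text.trim().split("\n");
  const keys = header.split(",").map((k) => k.trim());
  return rows.map((row) => {
    const cells = row.split(",").map((c) => c.trim());
    return Object.fromEntries(keys.map((k, i) => [k, cells[i] ?? ""]));
  });
}

// PUT replaces the whole /resources node, which matches the
// "fix the CSV locally, then reimport everything" workflow.
async function reimport(csvText: string, dbUrl: string): Promise<boolean> {
  const res = await fetch(`${dbUrl}/resources.json`, {
    method: "PUT",
    body: JSON.stringify(parseCsv(csvText)),
  });
  return res.ok;
}
```

A real importer should use a proper CSV parser (quoted fields, embedded commas), which is probably what those "stupid scripts" were handling.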

One afternoon and one morning of work.

Takeaways

  1. Build > Design > Iterate is the new process. Function first. Nuances next. Then endless iteration, and a much more polished product. When roles collapse (like Gary Chou talked about in my last podcast interview), you can get through dozens and dozens of iterations without saying a word.

  2. Be lazy! Ask Claude things you would normally go research or do yourself, and it will often do the work or suggest options you didn’t think of. (For example, I asked if the list could animate in, and it asked what kind. Normally I would run off and do the research myself, but I said “where can i see examples of that” and it offered to build a demo of 3 options.) Ask it for “the easiest way” to do something, and it will often tell you about new tools (like Firebase for databases) or suggest feature ideas you hadn’t considered (like a View All page in the extension to edit/delete links).

  3. Be excited! I don’t know if these fast-food tools are rotting my teeth, but I definitely would not have finished this project on my own. It’s cool to know I can now build Chrome extensions, and more.

Any questions? Was this useful? LMK and I’ll do more.




Read the whole story
cjheinz · 5 hours ago
AI working under the close supervision of a genius human being - brilliant!
Lexington, KY; Naples, FL

Nothing Works in Trump’s America — Except Racism


Nothing Works in Trump’s America — Except Racism. “Trump is objectively bad at running the government, but he’s objectively good at running a Klan rally, and his supporters value the latter so much that they forgive the former.”

Read the whole story
cjheinz · 5 hours ago
Wow.

Alien Politics


First, an apology. I’m behind in posting for a variety of life-related reasons. For example, this week I’m writing from rural Western Australia, where we are currently sitting in the path of Cyclone Narelle, a category 3 storm that has ravaged three states and now has its sights set on scouring the west coast. There’s that. Then there’s the foresight contract I’ve just wrapped up, which took up most of my free time, and of course, preparations for the release of my new short story collection (it really is happening this spring!).

I promise to tell you all about all of this, in due course, and will soon be back on my normal schedule, with some bonuses to come (such as a pre-order window opening soon for the collection, Laika’s Ghost.)


Our Next Political Move

As you know, I’ve been thinking a lot about the future of politics, because if we’re going to bequeath a just and humanistic political system to our kids, we have to start building it now. There are a lot of moving parts to such a project, so I’ve been wondering how to boil them down to fundamentals. One thing that is clear is that the political frameworks of the 19th and 20th centuries are not up to the job. What is the most critical addition we need to make to our political systems right now?

Our future political freedom depends on us developing protocols that deliberately hold understanding at bay during deliberation.

If this sounds weird, it’s because we are in a weird situation, and that is the point. The kind of abeyance I’m talking about is not like working with statistical uncertainty; I am doing that right now as I’m watching the many possible paths Narelle could take, including some that pass directly over my head in the next 48 hours. That’s what you might call ‘normal’ uncertainty. What I’m talking about is more like Badiou’s Event, a concept I’ve written about before. But let’s try to avoid abstraction here.

Imagine a very near future (say, later this year) when people turn to AI systems such as Grok to help them decide how to vote. Hopefully, we all know by now that these systems are designed to be sycophantic and therefore reinforce our biases rather than expand our worldview. ChatGPT, Gemini, Claude, and DeepSeek are all bias amplifiers, just like social media. But they can be tweaked to nudge our thinking in particular directions. They are going to have a big impact on voting patterns if their owners (who are all oligarchs, except for the Chinese, who are simply autocrats) have skewed their AIs’ models to reflect some partisan position.

This is just like the capture of journalism by the billionaires, so I won’t repeat arguments that others have made about that. It should be an obvious danger and therefore, politically, we need counterbalances in institutional or informal terms.

No, there’s a deeper issue here. It doesn’t have to do with AI’s sycophancy, but rather with its (and our) deep-seated drive to make things make sense.

The Bed of Procrustes

Large Language Model AIs aggregate humanity’s current understanding of the world. And, as Brian Boyd has pointed out, “if the human mind can understand something in narrative terms, it automatically will.” Whatever it is that is going on in the world today, we are frantically integrating it into a consensus-reality tale we’ve already written. The huge, under-examined problem is that if LLM AIs are bias amplifiers, they are also amplifiers of this integration process. They make things make sense to us, and they will try to do that even if the things in question do not make sense within any existing frame of thought.

We use them because they help us understand the world; and that is precisely why they are profoundly dangerous. See, there’s a lag between their training and what’s happening now; theorists and historians have not yet fully teased apart the phenomenon that is Trumpism, for instance, yet we’re living through it and have questions. LLMs are more than happy to answer those questions; but they will of necessity do so using the paradigms they were trained on.

This makes LLM AIs like Procrustes from the ancient Greek story, who would invite travelers to stay with him. If they were too short for his bed, he would stretch them to fit, and if they were too tall, he’d cut them down to size. This is what Large Language Models do with any situation we describe to them, because they represent the interconnections already present in language, and cannot reason or imagine new ones.

If something unprecedented happens, not only can they not recognize it, they will actively and cleverly confabulate an explanation that makes complete sense to us within the categories of thought that they’ve been trained on.

The AI apocalypse we should be worried about is not, therefore, them taking over the world and wiping us out. The AI apocalypse we should be worried about is one in which everything is explainable. AI is the Procrustean Bed for human knowledge.


The Department of Abeyance

Maybe aliens will help this make more sense. Say aliens land tomorrow, and we can kind-of communicate with them. There’ll be areas of overlap between our concepts and theirs. When they say something really strange, though, we have a couple of options. One is to take the weirdness seriously. Another is to treat what they’ve just said as nonsense and skip over it. Or, we can take what they’ve just said and shave off the uncomfortable parts until it fits the way we understand the world. We can Procrustize them.

To take the weirdness seriously means to, firstly, admit it is there, and secondly to refrain from ‘fixing it’ the way Procrustes would. We’d have to learn to dwell with the incomprehensible. Based on the past of Colonial Europe in contact with the cultures of the New World, that ‘dwelling-with’ seems highly unlikely to happen on its own.

So is democracy, unless you have institutions that are designed to support it.

We’re rapidly institutionalizing Large Language Models and thus, their ability to explain the world to us. I propose that we create institutions designed to counterbalance the Procrustean problem by deliberately holding off—keeping in abeyance—understanding when we sense that, in some way we can’t yet describe, there is more to the story.

What would such a Department of Abeyance look like?




Read the whole story
cjheinz · 4 days ago

A comparison of different sorting algorithms


A comparison of different sorting algorithms (bubble, merge, heap, timsort). You can run them one at a time or race all seven.
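For reference, bubble sort, the simplest of the bunch, fits in a dozen lines of TypeScript:

```typescript
// Bubble sort: repeatedly sweep the array, swapping adjacent out-of-order
// pairs, until a full sweep makes no swaps. O(n^2) comparisons, which is
// why it loses the race to merge sort, heapsort, and timsort.
function bubbleSort(xs: number[]): number[] {
  const a = [...xs]; // sort a copy, leave the input untouched
  for (let swapped = true; swapped; ) {
    swapped = false;
    for (let i = 0; i + 1 < a.length; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        swapped = true;
      }
    }
  }
  return a;
}
```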

Read the whole story
cjheinz · 5 days ago
Sorting algorithms are cool to watch.

Admitting when we’re wrong


There are several iconic lines in the 1975 blockbuster movie, Jaws. For example, “You’re gonna need a bigger boat,” has been repeated or paraphrased in numerous films.

My personal favorite is when Quint, the crusty old fisherman played by Robert Shaw, tells Matt Hooper, the know-it-all young scientist played by Richard Dreyfuss, “Well, it proves one thing Mr. Hooper. It proves that you wealthy college boys don’t have the education enough to admit when you’re wrong.”

I’ll admit that, as the only one of seven siblings without a college degree, I used to favor common sense and traditional wisdom over academic intellectualism. Now that I am older – READ: old – I try to look at things from as many different perspectives as I can. However, I generally prefer scientific method over traditional wisdom or gut feelings.

It seems universally true that most people, including me, don’t want to admit when they are wrong. In “Why Some People Will Never Admit That They’re Wrong” published in Psychology Today, Guy Winch writes,

No one enjoys being wrong. It’s an unpleasant emotional experience for all of us. The question is how do we respond when it turns out we were wrong. (...)

Some of us admit we were wrong and say, ‘Oops, you were right. (...)

Some of us kind of imply we were wrong, but we don’t do so explicitly or in a way that is satisfying to the other person. ... We accept responsibility fully or partially (sometimes, very, very partially), but we don’t push back against the actual facts.

But what about when a person does push back against the facts, when they simply cannot admit they were wrong in any circumstance? What is it in their psychological makeup that makes it impossible for them to admit they were wrong, even when it is obvious they were? And why does this happen so repetitively – why do they never admit they were wrong?

The answer is related to their ego; their very sense of self.

I am a lifelong Democrat. Sometimes I’ve defended a Democrat, or Democrats, because of party loyalty, even when I pretty much knew they were wrong. For example, in 1998 President Bill Clinton was impeached primarily for lying under oath and obstructing justice while trying to conceal his extramarital affair with White House intern Monica Lewinsky. I mostly excused Bill Clinton on the flimsy grounds that, although it was wrong for him to lie, it was understandable and excusable and even honorable for him to want to spare his wife and daughter and even the nation from the shame and tawdriness implicit in the sexual affair.

Nowadays, however, I have zero respect for Bill Clinton. I admit that I was wrong to excuse his lies and bad behavior. I will further admit that many years passed before I was able to make that admission. As a journalist, I have to be thick-skinned. But maybe my ego was more fragile than I thought.

Winch continues,

Some people have such a fragile ego, such brittle self-esteem, such a weak ‘psychological constitution,’ that admitting they made a mistake or that they were wrong is fundamentally too threatening for their egos to tolerate. Accepting they were wrong, absorbing that reality, would be so psychologically shattering that their defense mechanisms do something remarkable to avoid doing so – they literally distort their perception of reality to make it (reality) less threatening. Their defense mechanisms protect their fragile ego by changing the very facts in their mind, so they are no longer wrong or culpable.

Indeed, rather than admit that Bill Clinton was a sleazy lying adulterer, I shifted the blame to America’s puritanical history and resultant prudishness, arguing that progressive European nations like France or Italy wouldn’t have a problem with their president lying about an extramarital affair. In a modern civilized culture, that’s what one does in such a situation, right? Er ... no, that’s not right.

Not everyone who voted for Donald Trump identifies with the MAGA movement. Some are lifelong Republicans who always vote for Republicans, just as I’m a lifelong Democrat who always votes for Democrats.

In Gallup News, Jeffrey Jones writes that a “new high of 45% in U.S. identify as political independents; more independents lean Democratic than Republican, giving Democrats edge in party affiliation for first time since 2021.”

He continues, “The recent increase in independent identification is partly attributable to younger generations of Americans (millennials and Generation X) continuing to identify as independents at relatively high rates as they have gotten older. In contrast, older generations of Americans have been less likely to identify as independents over time. Generation Z, like previous generations before them when they were young, identify disproportionately as political independents.”

I have mixed feelings about Independents. On one hand, I believe there are clear and distinct differences between the two major parties that make it easy for me to choose to be a Democrat. I focus mainly on what the Democratic party represents and not on individual candidates. On the other hand, I can understand and empathize with voters — especially younger voters — who are fed up with, and even exhausted by, the rancor and vitriol between the two major parties.

Moreover, identifying as an Independent allows voters the freedom to micromanage their political beliefs and decisions – as opposed to accepting the “party line” adopted and promoted by either of the two major parties. In short, party loyalty comes with a price – that Independents presumably don’t have to pay.

Independent voters in critical swing states such as Arizona, North Carolina, Georgia and Pennsylvania were key to Donald Trump’s victory in the 2024 presidential election. (Those same Independent voters also helped several Democrats win their Senate races.) Considering President Trump’s abysmally low approval ratings — on tariffs and the economy, healthcare, gas and energy prices, the federal budget, immigration, Iran, Ukraine — I can’t help but wonder if some of them now regret voting for Trump.

One anonymous man who voted for Trump in 2024 admitted that things were “not going well. I was looking yesterday and, you know, Americans have lost over $1 trillion in their wealth in the last year, while the top 1% has gained over $10 trillion. And it’s like, that’s not exactly what was supposed to be happening.”

Or is it? I would argue strongly that this is exactly what Trump wants – the rich are getting richer, while the rest of us are not.

Regardless of his motives, Trump has seriously soured the American economy, and his tariffs and policies have hurt the economies of numerous friendly countries such as Canada, Mexico, Japan, Australia, Germany and other European allies. Yet Trump’s MAGA base continues to confuse their stubborn loyalty and blindness to the truth with inner strength and moral conviction.

It’s commonly known that we all make mistakes. Although it’s painful, we need to admit our mistakes, learn what we can from them, and resolve to make amends for them if possible.

And that includes MAGA Trump voters.

--30--

&&&

Thoughts on this? Leave them in the comments below.

Read the whole story
cjheinz · 7 days ago

Shy Girl, AI In Writing, And A New Perniciousness


I wanna talk about Cameron’s The Terminator and Carpenter’s The Thing, but first, let’s get it out of the way —

If you know anything at all about me in this Current Era, it is that I am vehemently opposed to generative AI. I do not use it. I will not use it. It does not exist for me in any form — the only “use” I had of it recently was writing my Vital Cat Update, which copied from Google’s search engine AI off its main search page. Otherwise, I don’t touch the stuff. I don’t even know how to access it. I couldn’t tell you how to use Chat GPT or Claude or any of that. My copy of Word is one with Copilot not inside it, and I had to change my subscription to get there. I turn off Apple Intelligence in every instance I can. I am against AI because it steals our work, which it then uses to steal our jobs, which it further uses to steal our water and our electricity.

Which is to say, it is here to steal our future.

So, I’m against it! It sucks moist open ass.

But there’s a delightful (read: not at all delightful!!) new perniciousness afoot, and that requires us to talk a little about the novel Shy Girl, by an author who I won’t even name because whatever she did or did not do, I do not think directing theoretical harassment toward said author is really valuable, nor is it the point. The problem isn’t one book. The problem is the whole system.

To keep it as brief as I can, what happened was, to my understanding:

Shy Girl was a self-published novel. A horror novel. It came out a year or so ago, on its own, I think? It did well enough, I guess, though I don’t know that it set the world on fire — but somehow a publisher, Hachette, picked it up for traditional publication and it was to come out soon. Ten months ago, there appeared to be accusations that the book read like it was written by generative AI in whole or in part. Those conversations continued and appeared to boil over right around now-ish, and the current narrative is that the author did not herself use generative AI, but employed an editor who made changes to the book using generative AI, changes that the author did not — review? Did not catch? I don’t know for sure.

Certainly some aspect of this may be wrong, or new details may come out, and if you have corrective details, please sling ’em in the comments below.

That is the situation currently.


To switch tracks a bit, though you’ll soon see (or already can predict) where this is going: I’ve in the last several months seen an uncomfortable number of instances, usually on Threads, where someone will look at a photograph or a video or a piece of art or graphic design and they will assert, with dogmatic certainty, that is AI.

And sometimes, it is, or appears to be.

And other times, it definitely isn’t.

I’ve seen people look at a beautiful, very real but also very-processed photo, and say with their whole chest, that shit is AI, and sometimes that’s started a small little avalanche of people asserting similarly. And in more than one instance, I’ve seen the creator come back and post how that photo predates the current generation of gen-AI — it’s just a photo that looks either really good because of Lightroom or really overprocessed because someone wanted a slick HDR effect, or whatever.

This has also happened with writing.


It started with the emdash.

It was asserted, with Great Authority, that emdash use was a strong signifier of a piece of writing being AI.

The artbarf robots, they said, love that little emdash sumbitch so much, so so much, that they just can’t help themselves.

Needless to say, that made my bowels go to ice water because —

Holy shit, I love the emdash, too.

In fact, most Current Era writers I know love love love a fucking emdash.

But instead of making me sympathetic toward the artbarf robots — “Aww, it loves the same things I do!” — it only made me hate the artbarf robots more, because the reason the piece-of-shit AI loves an emdash is because it stole all our work, and all our work features a lot of goddamn emdashes.

It doesn’t use emdashes.

We use emdashes, and it stole our work and then mimics us.

Emdashes and all.


So now, with Shy Girl, what do I see?

I see some folks putting forth the “signs” that told them that Shy Girl was very obviously AI-written, and those signs include a number of stylistic choices.

And when I say stylistic choices, they are not choices that generative AI made, because generative AI doesn’t make choices. It just eats and regurgitates.

We make choices, as authors. Narrative ones, stylistic ones, and so forth.

But this list of signs and symptoms and AI portents included stylistic choices that I myself absolutely one hundred percent make. Same as the emdash. I’ve seen people say that AI loves metaphors, AI loves certain kinds of repetition, it loves adjectives no wait it loves adverbs no wait it loves alliteration no wait–

Of course, again, as with choices, AI doesn’t love a fucking thing, because AI isn’t alive, it isn’t intelligent, it isn’t aware. The key word is always artificial. It fakes it. It fakes choices. It fakes preferences. It fakes love. And it is able to fake it because it stole those choices and preferences from us.


I saw The Terminator last night on the big screen. I’ve seen it before, obviously — seen it many, many times. Seen all of them! Even the stinky ones. But I think this was my first time seeing that one on the big screen. (It’s of course excellent, if occasionally a little corny and showing its age.)

But one place where it isn’t showing its age is how it still issues a sharp warning about AI — it’s long been held as a kind of bellwether for that particular threat, right? It’s an early iteration of the Torment Nexus meme. That warning has told us, hey, AI is going to get smart, get mean, it’s going to inhabit robots who want to kill us, it’s going to tangle itself up in our systems and decide that we’re a threat and drop a batch of nukes on our heads.

One of the warnings in the movie(s) didn’t really register for me back then, but it damn sure registers now.

What happens in the movie? The AI is going to pretend to be us, and it’s going to get harder and harder to tell the difference. It’s going to wear our faces. Only dogs will be able to sniff it out. It can steal our voices — so when we call home to talk to Mom, maybe the Mom we think we’re talking to is actually dead, and it’s a soulless Cyberdyne drone on the other end there.

That makes me think of John Carpenter’s The Thing, because it, too, understands that same threat, but worse — it understands the fear of being amongst your people except one of those people isn’t your people. Ohhh, no. It’s an Impostor, an alien being clothed in the raiment of your friend’s flesh, and soon you’ll be paranoid about who is alien and who is human, and you’ll have to work very hard to find a way to figure out just who is who — all that without accidentally killing a friend, or failing to kill the thing that wants to eat your face and then wear it.

Sound familiar?

The AI — artistically! — is us.

It steals our artistic skin.

It wears it, pretends to be us.

And it gets harder and harder to tell what’s us, and what’s it.


I’ve long said that one of the threats of AI is that it damages the fidelity of our information. Of truth and reality itself! It’s not just that it pumps out misinformation and disinformation — digital illusion and virtual legerdemain! — but rather that its mere existence makes it harder and harder to tell what is truth and what is fiction.

And we’re seeing that now with Shy Girl.

We’re seeing it with photos and videos and artwork.

People are right to hate AI — and the pernicious, insidious presence of AI has made them like the men trapped in that Antarctic base.

They are paranoid that it’s everywhere.

Because, ostensibly, it is. Or they (they being the techbros who are really the man behind the wizard curtain) want it to be. And it has a deleterious, corrosive effect on all that we do and all that we see. It’s like Paramount taking over CBS, or Musk taking over Twitter — it doesn’t matter whether it becomes successful; it just matters that they ruin the ability to disseminate good information. To ruin truth.


So, what the fuck do we do about all this?

I have no idea. I mean, the obvious thing on the face of it is to keep your own garden free of it. Pledge to use no AI. In all the ways you can avoid it? Avoid it. But that won’t stop someone in the future from telling you you’re using it, or an AI detector — which is itself AI! — from “detecting” it. And it won’t stop others from assuring you that this photo or that video or this logo is AI, even when it’s not. That certainty has been ruined.

More to the point, I don’t know what this means for writers, for readers, and for publishing at large. Ideally, publishing gets ahead of this problem and tries to get commitments from writers to not use AI — but therein lies a rub, too, wherein a “no AI” contract looks like a “morality clause.” Without clear definitions, if enough people were to accuse you and your book of being AI — whether at the authorial level, the editorial level, or in some aspect of publishing — they could get it tanked whether or not AI has ever even chastely kissed the work in question. And it doesn’t inspire confidence when a publisher like Hachette published Shy Girl… when the accusations of AI were already afoot. Did they do their due diligence? I don’t know. Maybe! But given the lack of editorial oversight… ennnh, maybe not.

Do I think AI should be published? I do not. I think using AI at any of those levels is not only problematic for the reasons listed above, it also takes opportunity from an Actual Human doing the Actual Work of Being Human. A contract given to some slopwrangler is a contract not given to an actual writer. A fake book will take the place of a real one. It’s stupid fucking robots all the way down when it should be humans.

So, this is a snarled nightmare tangle — one where the existence of AI en masse is becoming its own problem, regardless of its presence in any single instance of art or writing. We’re just going to have to do our best going forward. We must pledge not to use it — but also try to be very, very cautious about kicking other people under the tires of this bus without knowing for absolute sure what we’re accusing them of. As AI gets better, the environment in which it exists is only going to get noisier and more confusing. And we can’t just stick a copper wire into the blood of the book to make it transform into the monster, revealing its True Self.

We just gotta do our best. Be vigilant, be cautious.

And don’t use the AI slop-shitting artbarf techbro bullshit.


SIGH.

I do not care for this era of writing and publishing, lemme tell you.

The faster we pop this bubble, the better off we will all be.

Good luck, friends!

And fuck off, robots.

Buy my books or I die in the abyss.

Read the whole story
cjheinz · 11 days ago
A lot of truth spoken here.