What AI Trends Hide From and Potentially Teach Us
- Joseph Nockels
- Apr 14
Updated: May 7
Instead of the usual transcription-based post, I wanted to jot down some thoughts on AI-led trends. Last week will be etched into the history books as the advent of the ChatGPT AI doll/action figure trend, pasted all over your LinkedIn by suggested corporate connections you’ll never add, by well-meaning colleagues and, in two instances on my timeline, by organisations making serious points. Greenpeace posted a doll of a young person based on the Fridays for Future movement, with accessories relating to increased rent and a protest sign about the climate. Another case saw a social worker purposefully use a selfie in which he had noticeable bags under his eyes as the base for the image prompt, to make a point about the profession’s unreasonable stresses. We’ll return to themes of protest later in this post …
Elsewhere, the trend grew enough to be covered by the BBC’s technology reporters Liv McMahon and Imran Rahman-Jones, who made a figure of their colleague Zoe Kleinman, presumably as light relief from current world events (don’t say the word … don’t do it … tariffs) (ChatGPT AI action dolls: Concerns around the Barbie-like viral social trend - BBC News). The piece fell short of critical engagement with AI, offering a brief outline of their method and the broad appeal of the trend, including the line “somewhere in a data centre some hot computer servers were toiling away to make Action Figure Zoe”. To their credit, though, the main problem areas of climate concerns and privacy were referenced, with sources from Queen Mary’s and the creative industries.

As you can see, I confess I followed the trend. Perhaps this whole post is dealing with the resulting guilt, but I was genuinely interested in whether my prompt engineering could improve on McMahon and Rahman-Jones’s. Did I find anything out? Well, yes, actually: the model appears to pivot hard toward trend-based uses, possibly because of the volume of traffic asking for similar outputs, or because developer priorities shifted on seeing (or directing) the trend. The problems in Kleinman’s figure (text repetition, the cartoon effect) did not appear in mine, though admittedly I only generated one image. I swear. Had the model learnt from the increased interest paid to this action figure trend? It seems so.
Interestingly, in the library world the response to this trend and the BBC article has been different (at least within my algorithmic bubble). I saw librarians, whose daily work now includes advising faculties and students on AI tools alongside more traditional forms of information retrieval, making clear their judgement of anybody jumping on the trend, mainly on valid environmental and privacy grounds concerning OpenAI and ChatGPT. Again, I am sorry. Of course, they’re right, and a lot of interest has been paid to unpacking the ethical implications of such uses of Large Language Models (LLMs), especially when presented as inoffensive chat-based interfaces.
This is not a post exploring that literature in depth, but we remain at a point where the environmental costs of these tools - let alone an individual’s carbon output in creating a ChatGPT doll of themselves - are hard to ascertain. AI and ML processes are certainly computationally intensive, consume large amounts of energy and increase carbon output. Strubell et al.’s (2019, p. 4) attempt to quantify the approximate environmental costs of training neural network models for 24 hours found that advanced transformer models emit the same carbon as a trans-Atlantic flight (https://arxiv.org/abs/1906.02243). More and more AI firms are transitioning to such transformer methods, suggesting they are more efficient (they appear to be, in the case of my field of automated transcription), compact (depending on the architecture) and therefore more environmentally friendly (which does not appear to be so, at least on the basis of this 2019 study), with less reliance on sequential predictive processes and the ability to infer responses from historically fed data.

In the Digital Humanities, adjacent to the library world, there is - as much as some deny it - an increasing awareness of the environmental footprint of AI activities (Digital Humanities Climate Coalition; Information, Measurement and Practice (IMP) Action Group, 2022). However, methodologies to calculate the carbon footprint of ML research activities, called for by Lacoste et al. (2019, https://doi.org/10.48550/arXiv.1910.09700), are still needed, beyond the carbon offsetting Passalacqua highlights (2021, https://doi.org/10.1177/0022526620985073). In the case of the firms I work with, the environmental costs are sometimes not even known to developers, with external dependencies on commercial servers and cloud storage providers who refuse to share such details, and training parameters lost in the complexity of the model.
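To make concrete what such an estimate involves: the arithmetic behind figures like Strubell et al.’s is simple, even if the inputs are hard to obtain - the average power draw of the hardware, a data-centre overhead factor (PUE) and the carbon intensity of the grid it runs on. Below is a minimal sketch in Python. Every numeric default is an illustrative assumption rather than a measured value; a real estimate would substitute reported figures, or use a tool such as the emissions calculator Lacoste et al. describe.

```python
# A back-of-the-envelope estimate in the spirit of Strubell et al. (2019).
# All default values below are illustrative assumptions, not measured figures.

def training_co2_kg(hours: float,
                    gpu_count: int,
                    gpu_watts: float = 300.0,        # assumed average draw per GPU
                    cpu_watts: float = 100.0,        # assumed average CPU draw
                    dram_watts: float = 50.0,        # assumed average DRAM draw
                    pue: float = 1.58,               # data-centre overhead factor
                    grid_kg_per_kwh: float = 0.43):  # assumed grid carbon intensity
    """Rough CO2e (kg) for a single training run, given average power draws."""
    total_watts = cpu_watts + dram_watts + gpu_count * gpu_watts
    energy_kwh = pue * total_watts * hours / 1000.0  # convert watt-hours to kWh
    return energy_kwh * grid_kg_per_kwh

# Example: a hypothetical 24-hour run on 8 GPUs
print(f"{training_co2_kg(hours=24, gpu_count=8):.1f} kg CO2e")
```

The point of spelling this out is that the formula is trivial; what is missing are the inputs. Without knowing the hardware, the data-centre overheads or the grid mix behind a given ChatGPT request, the calculation cannot honestly be made - which is precisely the problem.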
So we’ve established that AI dollification is costly to the climate; however, we don’t know precisely how costly. The statistics are hidden.
Taking this as a departure point, what interested me further was whether discussions of AI trends should be included in existing debates around climate and personal versus corporate responsibility, such as those outlined by Cuomo (2011, https://www.jstor.org/stable/41328876). Should we be thinking about these implications in our individual actions - the creation of an AI action figure - when corporations are happy to follow technologically deterministic roads that cough, splutter and pollute at a far greater rate? Or, by acting as responsible citizens, do our actions enable us to lead with more authority? Are we hypocrites to talk about these issues while we also use, experiment with and attempt to understand these tools through use?
The answer is probably a mix, like everything - equal parts in good measure, with AI firms encouraged to be more transparent about their environmental footprints and individuals remaining conscious of making informed choices about their use. However, condemning those who have jumped on the trend potentially obscures that corporate responsibility. Skyrme and Levesque (2019, p. 15, https://doi.org/10.33137/cjalrcbu.v5.29652) articulate how individuals’ use of tactics such as self-branding via social media is an “individual solution to a mass problem” that diverts personal energy from contributing to collective responses to a systemic issue. This is the exact point Ash Sarkar, the progressive Novara journalist, applies to broader identity politics in relation to protesting societal injustice in her new book Minority Rule.
So, bringing together these two strands of thought: we should remain conscious of our individual uses of AI environmentally, while collectively discussing how to protest corporate irresponsibility. Our acts of resistance stem from our own individual actions, which we orient outwards, draw attention to, and use to begin holding decision makers accountable. The question remains: has this AI doll trend revealed a disconnect between our value statements (that external projection of action) and holding decision makers accountable? Are we all decision makers when it comes to AI? In my mind, the decision makers are those directing the ethos of firms like OpenAI, which does not completely exonerate us as individuals but should direct our ire. Why this positioning? Well, because we are currently fighting, with indignation, something we cannot see. We first need to pressure firms to release data on their carbon outputs, analyse it, and establish a position from which we can start making more meaningful judgements. For now, let’s see what the next inevitable AI trend reveals, while constructing anticipatory approaches to these broader ethical issues.