Some thoughts on Image Playground

I wrote a blog post some folks here might find interesting…

Tech pundits don’t understand the potential for Apple’s Image Playground

Over the past few weeks the mostly male tech pundits have been going out of their way to discuss just how bad Apple’s Image Playground is. All of them that I listen to have said the same thing. The latest is Manton Reece on the Core Intuition podcast, who can’t imagine a use case. To be fair, his co-host Daniel Jalkut at least acknowledges that normal people might find it useful or will, at least, be entertained by it. Thankfully, Manton and Daniel’s discussion was, as usual, measured and thoughtful; most tech guys are not so careful.

I’ve said many times before that a problem with the tech press is that they don’t make an effort to get out of their heads and their specific use cases. Everything they write or discuss is from their own, very limited perspective, and more often than not it seems overly cynical as they assume worst-case scenarios. In the case of Image Playground they’re going out of their way to trick the app into making the kinds of imagery Apple is seeking to avoid. It’s gotcha journalism, but without the journalism…

2 Likes

I was happy when Image Playground finally arrived on my system, after weeks on the waiting list. But I got tired of it after a few minutes. “Playground” is right. A small child might be amused for longer.

2 Likes

Image Playground is interesting, though it seems more like an outline of an idea for an image-generation app. I like that it builds on a mix of memes, which is an interesting way to generate an image, rather than depending on the user’s skill at writing verbal prompts. The results are often bland, and the variations it offers are not much different from one another. But Image Playground has only been in the wild for a month or so, and it’s easy to cut it some slack.

Katie

2 Likes

Image Playground gave our son’s dog an extra foreleg!

6 Likes

Yeah, no doubt, it can do some goofy stuff!

That said, it doesn’t take much effort to get excellent results. I wrote a blog post this morning again pointing out that there is actually a great use case that tech folks keep overlooking as they fall all over themselves trying to shout the loudest about how terrible it is. Simply put, it’s a fantastic upgrade over clip art for normal humans who use Pages, Keynote, etc. to publish newsletters, school reports, presentations, and so on. A few examples from this morning, using Apple Notes to generate the art:



3 Likes

Every time I seem to have a drastically different opinion on something, I wonder if I am the one who is wrong. :wink: You can say the tech press is wrong, but people like Jason Snell, whom I probably respect the most out of all the tech press, are pretty much saying the same thing: that it is good for five minutes of fun.

Which is how I feel. It was neat to play around with, but I doubt I will ever use it again, which is exactly what real-life friends have said as well. I mean, it is cool, but it screams AI art, and there is real pushback against using AI for creative purposes on the general web. I guess it’s no worse than the usual horrible stock photos people tend to use otherwise.

1 Like

My sense of it is that the tech press and the tech enthusiasts who frequent forums and use betas are far from the normal user. Over the years I’ve volunteered with quite a few small nonprofits, and in that role I’ve put together countless newsletters and annual reports, both as a paid staff member and as a volunteer. Image Playground would have saved me a lot of time, with the same or better results.

As recently as last week I had a client request a flyer/mailer that included several illustrations. Her budget did not include the time it would take for me to do actual illustrations. In such cases I often use the built-in stock art tool in the Affinity apps. For her flyer I used Image Playground instead and got something better. She loved it.

I guess I’d be interested in asking folks here: have you ever put together a newsletter for a community association or some other club, non-profit, etc.? Ever helped a student with a report, presentation, or some other document using an app like Pages or Keynote? Wouldn’t the use case I’ve highlighted be a huge improvement over the standard, basic clip art?

I’m just not seeing the tech press actually report on anything other than the “I used Image Playground to make an image of a friend or myself and it was terrible” kind of thing. And that’s fine for their minute of entertainment. But goofing off making images of friends is not the same as taking the time to consider real-world uses and, you know, reporting on the technology like a journalist might.

4 Likes

I haven’t tried much of the “clip art” type stuff that @Denny showed, but it looks promising.

What’s bothering me right now is the mystery around how Playground chooses the people you can use as a “base” for your generated portraits, why I can’t name people who aren’t in that list (but whom I do have tagged/named in Photos), and how I can remove people I don’t want.

I already found at least one word or phrase it refused to do anything with. Oddly enough, the word I thought it found offensive worked in Genmoji. So either the rules are different or it’s using a different context.

About people and the base for generated portraits, is it not possible to use the photo browser/picker and pick any person, named or not? That works for me. That also allows you to choose any photo in your library for non-human images of scenes and objects. I’ve gotten what I consider to be excellent results using that method.

I suspect people will get used to them. Humans are wired to be curious about and distrust new things.

I often use free Unsplash pictures in my newsletter, and sometimes a picture does tell a thousand words, even if it was someone or something else’s picture.

1 Like

I deleted my post because debating this is pointless.

Anyway, people will get used to it, because everywhere it’s AI, AI, AI. It will become the norm. I doubt it will ever be liked, though. This thread is a case in point. The only person I’ve come across anywhere online who likes Image Playground is Denny. It’s been trashed pretty much everywhere as a fun toy with little to no use.

Fair enough. None of it really matters.

I’m on team Denny.

The point he is making, and I think he’s absolutely right, is that at least parts of the “technosphere” (tech bloggers and podcasters) are increasingly representing and understanding only themselves and their obsession with tech for its own sake. And that does lead to a kind of monoculture, where certain opinions dominate and many use cases are ignored or not even thought about.

That’s fine, except that the technosphere is seen as a source of helpful opinion, especially for consumers who do not have the time or means to explore it all themselves and experiment with what would work for them. There’s an awful lot of “review” and reporting of new products and services in tech blogging and podcasts. If that loses sight of what matters to a wide range of users, it loses value. Most users want their tech to do things for them.

Incidentally, one of the things I value about MPU is that the hosts are very willing to accept that what might please them ($4,000+ laptops, $6,000 displays) is not necessary or even relevant for vast numbers of users and that simple is often better.

Apple’s generative image AI (Genmoji, Image Playground, etc.) feels like an early iteration right now, but I predict it will sit there quietly in the background, doing an increasingly good job over time of supporting messages, mind-mapping, document writing, note-taking, whiteboarding, presenting, newsletters and any other task in the Apple ecosystem where you might need an illustration.

2 Likes

I completely agree with that part, particularly in the Apple tech press/blogger/podcaster area. You don’t find varying opinions, and it seems like whatever the latest news item is, they all cover it and pretty much all agree on it, presenting the same viewpoint (which is why I cut back on my Apple-related reading/listening in the last few years). Then I get on an Apple-focused forum/Reddit and see a different take on things. I just don’t think this subject reinforces that idea, though, since the Apple AI stuff has regularly been panned all over the internet. At least in the places I read.

Of course Denny is often posting his blog posts here, so he is part of the group we are talking about. :wink:

2 Likes

True, inasmuch as I have a blog and sometimes write about Apple! That said, I continue to puzzle over what I perceive to be a lack of professionalism on the part of those getting paid to comment. These days the majority of paid “content creators” publish everything from their personal perspective. Where are the publishers who reach out and make an attempt to understand real-world professional use of Apple tech?

I’ve been using and writing about Apple for 24-ish years, but never for an income, just as an enthusiast. Even so, I enjoy taking the time to do “research” and have made it a regular practice to gently interrogate friends, family (and occasionally clients, as it’s often relevant to the work) about their use.

After our conversation here last night I called my sister, who has been the office manager for an elementary school for the past 10 years. Before that she volunteered with the school’s PTO for 5 years. She’s one of my go-to people when I’m looking for real-world use cases in education. She works with teachers and other admin staff at the school, as well as staff in the central district office. I regularly talk to her about the tech they use, from hardware to the sysadmin systems in the central office to the local, school-specific implementations of things like Google Forms, etc.

In mid-November, when I wrote my first post on Image Playground, I reached out to her and a teacher. I offered examples and we chatted briefly then about whether they thought it would be useful in the school. I pursued that line of questioning last night, offering more examples and digging into the ways they currently use images and graphics in the elementary school.

My conversation with her only confirmed my current thinking. My suggestion to Apple-focused tech journalists is to make more of an effort to understand what’s happening in this area and what Apple is offering. Their groupthink cynicism is not helpful, and it’s not serving the user or potential-user community.

3 Likes

I’ve tried it, but thus far each attempt has resulted in “Try a photo with the face more in view.” These are photos in which Photos has correctly located and identified people’s faces, so there’s enough for that app’s algorithms to work with, but apparently not for Playground.

You did better than me. Despite the fact that my iPhone is set to put new apps in the App Library only, Apple dropped the Playground icon on my Home Screen. I deleted it and never launched the app.

So far, IMO, Apple Intelligence has been mainly “chips and dip”. They are behind and they are serving up snacks because the main course, i.e. a new Siri, won’t be making an appearance until Spring 2026. I have no interest in Playground or Genmoji, etc.

But I will be trying to find out why the Siri that arrived on my three day old iPhone 16 Pro keeps saying it can’t tell me the weather or temperature, etc. until I unlock my phone!

Image Playground v1 is kludgy and even buggy, especially where it interfaces with other apps (e.g., using it integrated into the new MindNode, you have to select the text box for prompts before you can see a preview that’s big enough to make out). And like all generative AI, it does not magically translate what you are imagining or describing into an illustration: results may be jarringly odd or disappointingly banal, and you don’t get even the level of control that something like Adobe Firefly gives you (where you can prompt for style, a broad colour palette, etc.). But, like a lot of Apple apps, it’s right there, embedded in the OS and integrated into other apps. It will get better.

It’s already doing real work for me. I like how MindNode is so easy to use and reliable, but it’s never been visual enough for me. I learnt mind-mapping on paper, before there were PCs, with a big box of pens, spending at least as much time sketching and embellishing illustrations, decorating branches and links, and using hand-lettered text styles to reflect the associations and concepts I was mapping as I spent making the nodes on the map.

Now that I can attach images to nodes in MindNode using Image Playground, I am rediscovering the power of creative imagery in mind-mapping. Simply looking through what has been generated, to decide which image is best to attach, builds lots of non-verbal associations for me in the topic I am exploring. I could always attach clip art, but the immediacy (I don’t have to leave MindNode and can stay in context) and being forced to choose which generated illustration to go with are great.

I’ve recently rediscovered the power of colour, images, etc. in helping me think and construct my understanding, through Craft with its new styling features and through using Adobe Firefly to generate cover images and backgrounds. The prospect that I might be able to use similar approaches across apps, and especially in mind-mapping, is exciting.

It’s a journey: generative AI from Apple has not arrived, but I am happy to see it taking its first steps.

4 Likes

I have a question, but first, I want to make it abundantly clear that I’m not challenging or critiquing. My question is genuine. I give that caveat because I don’t want my question or motive for it to be misunderstood.

I’m intrigued by your interest in illustrating your mind maps. I use mind maps frequently to outline my thoughts, especially for articles and presentations. The idea of spending time to add images or illustrations to my mind maps seems like a “waste of time.” How does devoting time to find or create images to add to your mind maps help with thinking and connecting concepts? Again, my question is genuine; it is not a challenge or a critique. :slightly_smiling_face:

It comes from Tony Buzan, who popularised (his legacy company says “invented”) mind-mapping in the 1970s and 1980s. He sums up what he thinks a mind map is in this YouTube video.

On a deeper and maybe less precise level than Buzan’s explanation, there are two insights about thinking and learning:

  • No-one thinks and learns purely verbally/textually. There are always non-verbal ways in which we build our own understanding, whether we realise it or not. We use imagined images and even sounds and objects to make sense of things, especially when we are trying to understand something new and find ways to build it into our mental model of the world. It’s hard to describe these processes precisely because they are non-verbal.
  • Thinking and understanding are massively associative. We link new ideas to existing ideas through their relationships to what we already know and to our experiences. These associations are very often non-verbal.

To give the example I explored in my Master’s thesis (a long time ago!): successful computer interfaces rely on many layers of imagery and models, and on the associations we have with them. A desktop on a screen is and is not like a real desktop. Files are and are not like real files in a cabinet, etc.

So, if I am trying to get to grips with some ideas, or make sense of a topic, especially deeply, a good tool for thought is likely to help me use imagery and models alongside structures (like outlines) and text to build and reinforce associations, many of which are deeply personal. Putting images and sketches in notes, using colours etc. is, in my view, as essential in this process as finding the right words to express something.

If I am confident in my understanding of something, then text-heavy approaches can abstract the thought and be productive: I might just need to write, or use a text outline or text-only mind map, to polish my expression. But if I am trying to work something out, the more visual and genuinely interesting and creative I can make the process (mind map, whiteboard, flip chart), including doodles, sketches and illustrations, the quicker and better I am likely to learn. For me, simply realising that a particular illustration fits or does not fit an idea helps me grasp the idea.

I recommend reading or watching some Buzan. It’s quite old now, our understanding has developed since, and he’s a classic “salesman” type personality (so much so that his enthusiasm and energy, and his fierce defence of his unique approach, make me think I am being scammed), but there are lots of things in there that make me stop and say, “that’s so obvious, why is no-one else saying it?” He did two series on the BBC just as I was at the end of high school and about to go to university, which dramatically changed how I studied for the better, and a lot of which I still rely on, even if I have forgotten where it came from.

Hope this helps

4 Likes