Day 3 – Creating images

Welcome to Day 3 of the 12 Days of AI. Today’s AI tool (or tools!) focuses on image creation.

Note: UAL staff and students have free access to Adobe’s image creation tool, Firefly. If you don’t have institutional access to Firefly, you can use DALL-E as an alternative. If you have time, you might want to try (and contrast) both of them.

What are Adobe Firefly and DALL-E?

Firefly is Adobe’s artificial intelligence system for generating images from scratch. It is being integrated into Adobe’s tools, coexisting with Photoshop, Illustrator and InDesign, and it has been trained to generate images from a description of what you want to see. Similarly, DALL-E is a generative AI tool from OpenAI that lets users create new images from text prompts.

However, Adobe Firefly has a few differences that set it apart from rival offerings (DALL-E, Midjourney and Stable Diffusion), such as its claimed ‘ethical’ credentials. Many AI models were trained on images scraped from the internet with no regard for copyright; Firefly, by contrast, was trained only on openly licensed images, content that is no longer in copyright, and content from Adobe Stock. See Adobe’s and DALL-E’s privacy policies for details.

Firefly on a Christmas tree; Image created with Adobe Firefly

How do I access it?

Go to Adobe Firefly and sign in with your UAL (or other institutional or personal) credentials.

Alternatively, access DALL-E and create an account. We have also listed other alternatives below.

Your Task:

Create an image based on your job role or position in society.

  • Go to Adobe Firefly or DALL-E (or both) and ask it to create an image of yourself based on your job title. For example, “create an image of a fine art tutor working in a UK art college” or “create an image of a senior librarian working in an engineering university”.
  • Go to Adobe Firefly or DALL-E (or both) and ask it to create an image of yourself based on your role in society. For example, “create an image of a parent who has three children” or “create an image of a rugby coach”. (If you would rather script this, see the sketch just below.)
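
If you would rather experiment programmatically, the sketch below shows roughly how the same task could be done through OpenAI’s image API with the official openai Python package (the web tools above need no code at all). It is a minimal illustration rather than part of the task: it assumes you have an OpenAI API key set in the OPENAI_API_KEY environment variable, and note that API image generation is billed separately from the web interfaces.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Same style of prompt as the task above; adjust to your own role.
    result = client.images.generate(
        model="dall-e-3",
        prompt="Create an image of a senior librarian working in an engineering university",
        size="1024x1024",
        n=1,  # DALL-E 3 accepts one image per request
    )

    # The API returns a (time-limited) URL to the generated image.
    print(result.data[0].url)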

Alternatives to Firefly/DALL-E

Optional reading

Things can get interesting when we start combining AI tools. Earlier this month, as part of our AI Conversations series, Dave White (XYZ, UAL) discussed the question “Is AI stealing our Creative Labour?” (you can find the video recording on this blog). After the session, Chris generated an AI-written blog post from the transcript using ChatGPT, Claude and Adobe Firefly. Read the post on his blog.

Also check out…

Join the conversation:

Reply in the comments below or post on X using #12DoAI. You may want to come back later to reply to others’ comments.

  • How did you get on with the task?
  • What did you think about the images Firefly/DALL-E created?
  • Did the images surprise you?
  • Is AI stifling creativity – or allowing room for more to flourish?
  • Were there any biases or stereotypes in these images?
  • Did you have to modify the prompt to make it more representative of the job/position you were looking for?
  • How do you think these types of image could be useful in your practice?

Join the competition

We’re running a competition during this course where you can win actual prizes! Learn more about how to enter the competition.

51 Comments on “Day 3 – Creating images”

  1. Image generation is probably the element of GenAI that I have found most useful and the easiest type of content to create over the last few months. Firefly Image 2 seemed to produce higher-quality images of a person than DALL-E, and I liked that when I downloaded the image it had ‘content credentials’ stating that the image had been generated by AI, which I think is important for transparency. As someone who works in a non-“creative” industry, I have found image GenAI really helpful, and it has encouraged me to think more creatively. I often have ideas about how I want an image to look and the imagination for such things, but not the technical or artistic skill to realise them. So for people like me who work in a non-creative industry, I think these tools are fantastic! I didn’t have to modify the prompt with regards to my job role, but no amount of prompting would give me straight brown hair – every image had curls! 🙂

  2. I used DALL-E and wasn’t very impressed with the results, after asking a very specific question about myself. I do think it’s allowing more creativity, but I’m not sure about the quality just yet. I had to modify the question a number of times to get anything near to what I thought the images should look like. Still not sure it captured my prompts though. The final images all looked false, particularly the eyes.
    I’m not sure that I would use DALL-E images for presentation work.

    • That was my experience with DALL-E too. All the images of ‘me’ I found quite traditional and boring and I didn’t see myself there at all. After quite a few tweaks to get away from the stereotypes, it started to be a bit more acceptable, but the more I tried to make it do what I wanted, the more distorted the faces became.

  3. Thanks for the tip about Firefly. I’ve been too cheap to pay for an AI image generator thus far, and I’ve found the free ones a little underwhelming. Turns out my institution has an Adobe licence that includes Firefly. Lots of exploring to do… thank you!

    • Yes — it turns out my institution has a license for this as well. Who knew?! Very useful to find this out, and the open-source credentials for Firefly are a bit reassuring.

  4. I used Adobe Firefly (for the first time) with the prompt “Create an image of a learning technologist working at a Dutch university” and was somewhat surprised by the results. Three of the four people pictured were young white women with long blonde/light brown hair and glasses (the fourth was a young white man with long light brown hair and glasses!) and ‘learning technologist’ seemed to be interpreted as ‘has paper or a laptop’. Two of them were working outside, I assume to show off the grey Dutch landscape 😉 I realised that including ‘Dutch’ in a visual prompt was irrelevant for my needs, but when I removed it the four images had exactly the same gender and race balance…and this time two of the four had a white coat and a stethoscope! My prompt is perhaps too vague—not that that explains the gender and race imbalance—so I’ll keep getting more specific to see if that helps. As it stands, I would not be happy using these images in a professional context.

    • Interesting, as I put in something similar but for a UK university and was impressed that all were female, and only one was white. I was assuming that Adobe had done something to try to address the usual racial imbalance.

  5. I tried Firefly Image 2 and it’s quite powerful: you can upload an image to match the style of the generated image you would like, and there are settings that allow you to generate the image with different angles, effects, colour tones, etc. However, you only have a certain number of credits for generating images with the trial, and it’s difficult to get the right image by adjusting the prompts before you run out of credits. And you will certainly lose track of time creating an image with different prompts, as the whole setting is so experimental. If you use a stock image website to search for an image, the search and filter functions are so powerful that it’s still much more efficient to do a search than to create an image from scratch with AI prompts. I would only use Firefly or other GenAI image tools to generate an image if I needed to create something more imaginative, or something difficult to license, like editorial-only images.

  6. I asked Adobe Firefly to ‘create an image of an academic developer working at UK red brick university’. Here are my top line thoughts guided by the discussion prompts: it had no idea what to do with red brick apart from add it into the background; there was racial diversity in the images generated (surprising to see, given what I’ve read about gen AI in the area of representation); age range was all identical (30s); generated people are middle class based on appearance i.e., dress (probably accurate but not great); a laptop is visible in every generation (implying ‘developer’ is taken to mean a programmer or similar); there are extremely weird visual artefacts, especially around hands; it is quite good at generating faces.

    Overall, interesting as an exercise but these images are unusable in my work given the numerous oddities/mistakes.

  7. I am a Media Design lecturer, multimedia artist and photographer, and tried the prompts ‘artist’ and ‘photography lecturer’. Both DALL-E and Firefly gave me similar results: one dark-skinned male among two women and two men; the photography lecturers wear glasses and the artists no glasses.

    Firefly seemed more realistic; DALL-E gave me horror faces, bulging eyes and strange hands. When I typed in ‘German lecturer’ the types changed slightly and became more blond. I also tried ‘female middle-aged art school lecturer in the UK’. It gave me light- to dark-skinned women, but all were overweight. Bias here?
    I am not impressed with the results, but the idea is probably to give the AI system an example photograph to get better results, or to combine different tools. I should find the time to do so.

  8. I used Firefly (DALL-E told me I had to purchase credits) and it is fine; I’ve been using it a bit recently. Sometimes getting a good image can be trial and error, especially since you can apply various filters, but I’ve found it is best to keep it simple.
    I am using it simply to get illustrative images, not to create artwork. At present they all seem pretty generic, but that may be me, and I am sure other tools can be used much more creatively. These tools will allow the majority of people to create standard, pretty good images, and that will be the baseline for all images. Those who are more creative will create much better images, as they will have a specific aesthetic sense that they will have developed.
    Regarding bias: the images were all of young people (I’m not, so I asked it to modify to middle-aged) and were all white (I am, but I did not specify white; that was the default).
    I will probably use AI images in presentations or guides.

  9. I’ve tried a few image generators recently so the task was easy enough. In a previous activity, I was trying to show the biases present in image generators, so the images I was given in this task did not surprise me as such.
    I don’t have access to Adobe Firefly so I couldn’t try the options there, but yes, in my experience, images generated via the free options are inherently biased, particularly when we ask them to produce images of people.
    My prompt was “create an image of a female Learning Technologist working in a London university”. All four images from DALL-E were of white women of a similar age, all sitting in front of a computer. There were no women of colour, no headgear, no glasses, and they were all probably around a size 12. Needless to say, this is certainly not reflective of female learning technologists across the globe.
    I can see why image generators would be useful in my role, but I would need to be very specific or careful with my prompts to ensure diverse results.

  10. After using both Adobe Firefly and DALL-E, I have to say I was totally unimpressed with how they both depicted my role as a librarian. Neither could understand the concept of a digital library: despite me adding prompts such as ‘futuristic’ and ‘no physical books’, they both insisted on adding the traditional library background. There also appeared to be gender bias in how they perceived a librarian: Firefly gave three images of females and one man, DALL-E three males and one female.
    With the second prompt there were pros and cons to how both depicted me as a daily walker in a park with exercise equipment – both used equipment you typically find in a gym – and again there was gender bias towards women as walkers.
    Overall, both chose images of white people, which I am not, and I needed to specify gender, colour and age to get somewhere close. In DALL-E you can see recent images created; Firefly did not appear to have this function.

  11. I’ve just tried it to generate labelled diagrams; both the carbon cycle and a human heart looked visually pretty much as I’d have expected them to, but in both cases the labels were clearly generated images of text and looked more like Cyrillic than Latin letters when scrolling quickly over them. Here are two examples, one from Bing and another from Firefly – I even told the latter to include labels in English: https://flic.kr/s/aHBqjB5LcL

  12. Hi, think it’s great that you shared Gary’s comment in this morning’s email. For me, he captures both the sense of amazement at, and the obviousness of the impact of, thinking machines. I wonder why, then, the links you share on the blog (in the Also check out) tend to be negative (e.g. the free ones today)? There’s lots of academic resources out there which embrace the enthusiasm colleagues have for our AI future. For example, Vanderbilt have a follow-on to the prompt engineering course you’ve mentioned: “[AI is] about augmented intelligence, where we amplify and augment human creativity and problem solving skills, where we basically give people new capabilities. It’s like an exoskeleton for the mind that can amplify by that human spark within you, enable you to create and do more interesting things than ever before”.

    https://www.coursera.org/learn/generative-ai/lecture/jXJnz/the-achieve-framework-for-augmented-intelligence

    • Hi John, thanks for your comment. Both Gary’s quote and the articles in today’s post represent different views about AI and image creation: some people see it as unleashing creativity and others as curbing it. I think it’s good to see different perspectives, and we leave it up to readers to make up their own minds on this issue.

  13. I’ve been using image generation with Bing (based on Dall-E) to show visually why things may not be quite right with AI. I usually find I need to adapt the prompts, but even so the outputs always have different problems. It is a quick way to show that we may judge the outputs as being inaccurate (see David’s post above), unethical (clearly taking someone else’s intellectual property), unrepresentative of real situations, and phantasmagoric (ridiculous) but with text, we may not notice the flaws so obviously as we do with images. Great for starting discussions.

  14. I used Firefly for the first prompt – an image based on the role I have. Very interesting that of the four images generated, three were men, all white and with beards! One was of a woman, equally white. All have a university-type background, which would tally with the spec, but I was disappointed by the stereotypes appearing.

  15. I am a lecturer in Fashion Textile Design and have dabbled with an AI image generator before. I was not impressed with the artwork generated; it was interesting but really artificial. For the purposes of today’s task I accessed Adobe Firefly and typed into the text box
    Lecturer in Printed Textile Design
    The image was pretty good; however, it looked remarkably like a fellow colleague who is prolific on social media, which made me laugh. The app is definitely scraping data from these sites. As a woman of colour it didn’t reflect me, and I would only use this AI to demonstrate how wrong it can be. Maybe I was too vague in my prompt?

  16. I’ve been using these tools for a while and find creating photorealistic images of synthetic people quite troubling. I’m more comfortable with outputs that are not photos, such as a sketch or watercolour. Prompting for the kind of output wanted feels like stumbling in the dark, but does occasionally come up with something interesting. I think these could be useful in multimodal teaching when images might be added to an LMS or presentation, for example alongside text.
    Eyes are still a problem, but do seem to be getting better, as does some diversity in subjects, though that varies quite a bit between these platforms and clearly has a long way to go.
    Here are a few random images quickly made in DALL-E without much prompting, and not looking like me. Trying to do that would have been easier if I had uploaded a photo as a starting prompt, but that’s not something I wish to do, as it would probably end up in the data set.

    https://labs.openai.com/s/886zQhi0mr59sMgZLgb4IqZY
    https://labs.openai.com/s/5JebPQpzkfn2VKSKdBu2mNM8
    https://labs.openai.com/s/OpdwfuiWPPkV6KmBwhPvVQNN

    • For sure, but it might offer something as part of a process. I personally find stock photos immensely bland and boring, and in an odd way I appreciate some of the AI errors which can serendipitously appear. Pencils are usually just wrong.

  17. To be honest, I was a bit disappointed with the results from Firefly, although my job title of Academic Lead for Digital Teaching and Learning might be a bit too specific. Even after tweaking the prompt several times, the images generated were a bit childish or cartoonish and simply showed a teacher with a laptop. When I added that I was in an arts institution it gave me glasses, longish hair and a beard, so certain stereotypes were certainly confirmed. Three of the four images were male, so there might be some gender bias as well, since unless it recognised my name there was nothing in my prompt to indicate or suggest gender.
    I think I would like to explore this in more detail with other AI platforms such as Midjourney to determine whether or not I like the concept. My initial reaction is that it could be a useful ‘scratchpad’ for working through your ideas and combining unrelated elements before the student adapts the results to their own needs or unique vision.

    • I’d agree a lot of the imagery tends towards the cartoonish and fantastic. Output quality also varies between the tools, with Midjourney probably the most refined. If you create a Midjourney account, which uses Discord (a sort of social media platform) as the controlling interface, you are presented with an endless scroll of the images being created at that moment. To my mind, not exactly inspiring for the future of image making, especially for those who work with material artefacts.

  18. It was very easy to start generating images with Adobe Firefly. DALL-E asked me for credits, so I didn’t pursue it.

    The quality of the images was very good; they could very easily pass as images of genuine people. However, I noticed a bias towards white and cisgender-looking features, especially when asking Firefly to create a portrait of me. For instance, I described myself as non-binary and the images I got were of people with cisgender male features.

    Other biases were present too, including around body image and size, as well as age: to be more precise, the images generated were of a slim person in their mid-20s to mid-30s. Some stereotypes were also present, e.g. librarians depicted wearing formal wear and glasses, surrounded by books or directly reading them.

    As great as these tools are, as an artist I am highly concerned about them driving people out of jobs, especially to do with photography and digital art and I am also concerned about ethics.

  19. This was fun to try! Dall-E wanted me to buy credits, but I was able to use Adobe Firefly after signing up with my work email address. A basic prompt based on my job title was not very effective – ‘digital skills trainer’ produced images where all the people were young and fit, and 3 out of 4 of them were using gym equipment! At least the 4th one had a laptop. 3 out of 4 were white and 3 out of 4 were men. A more detailed description of what I wanted the person to look like and be doing was much more effective. There seemed to be lots of options for customising the images – this is something I’ll look at another time.

  20. I found both platforms easy to use. I’ve tried Canva’s AI image generator before, and likewise with DALL-E and Firefly I haven’t found many images that I like. I suspect this is more to do with my insufficient prompting than with the tools themselves. The images generated by Firefly were definitely preferable to those from DALL-E: the DALL-E images were a bit strange and very ‘commercial’, whereas those produced by Firefly were a little more ‘casual’ and friendly. Overall, though, I didn’t really like any of them and certainly wouldn’t use them. The poor quality of the images did surprise me, but as stated earlier, I’m sure the quality could be improved with better prompting. I generated images using the prompt ‘create an image of a digital skills tutor working in a university’ and was pleasantly surprised that all images created by both platforms included a female; I was definitely expecting them to be male-dominated. There was less bias/stereotyping in the images produced by Firefly, which incorporated more diversity, whereas those from DALL-E were heavily white and female.

    For me personally, I think the potential for using AI to create images is allowing for creativity rather than stifling it – if I improve my prompting! Sometimes I know what I want an image to look like but don’t have the technical skills to create it – AI could provide me with an opportunity to create this without these skills. However, I wouldn’t want to see it being used rather than reaching out to artists.

  21. I teach preschool gymnastics and I asked Firefly to create a logo for my team. 2 of the 4 images produced children with at least one extra limb. I was really hoping to find that it would be able to create labelled diagrams of human anatomy but I could not get anything close to what would be usable.

  22. In our paper presented at #ascilite23 in NZ this week, we used image generation as part of the whole exercise about AI PD. For the presentation we got the audience to select phrases from a poem and then plugged those into an image generator. It was fun to break the rules of iterative prompting and not have an expectation of what the image would be like. However, we found that we would often tweak the image until we ‘liked’ it. This is the human element. https://doi.org/10.14742/apubs.2023.514

  23. This was my first time using Firefly. With the prompt about my role at a British university, I got images featuring either white males or white females. Stereotypes (or common cores, from another perspective) about ethnicity, gender and profession were evident in the results even after I kept refining my prompts. As I am in professional services, the tool could be very useful for developing teaching materials and activities.

  24. I guess DALL-E’s not free any more, so I used Bing Image Creator (built on DALL-E). I posted “Create an image of a digital learning designer working with the faculty of arts and social sciences in a university in UK” and got four rather boring images, three of male graphic artists, so I added “as a Christmas elf” and got four slightly more amusing images of male graphic artist elves working with computers in offices:
    https://padlet.com/hghodbane/12-days-of-ai-n5zr34co8gy1pd6c/wish/2813577919.
    Looks like there aren’t any female digital learning designer elves!

  25. I used Firefly, as I had tried DALL-E before. I am also a learning technologist and got the same mix of genders as Penny (one man, three women); they were all white and slim with very long blonde hair. They all looked like catalogue models… which seems to be what I get when asking for any type of human without specifying physical characteristics.
    Out of curiosity, I tried the exact same prompt but added “ugly”. The results I got that time were definitely not ugly; it just seemed to age similar-looking humans by 10–20 years… Charming!
    I’m not a fan of using these tools for generating “realistic” images, as they do have issues with stereotypes, bias and creating extra appendages! I’d rather use them for ethereal scenes, cartoons and illustrative work.

  26. I had a look at DALL-E 2 but needed credits to use it and didn’t buy any. I tried Clipdrop by Stable Diffusion, but it didn’t generate an image; it wanted me to upgrade to a pro licence. Midjourney needed an account. I am retired, so I don’t have access to any institutional accounts.
    With StarryAI https://starryai.com/ I logged in with Google to a free account and had three credits. There seem to be five free daily credits, or you can buy them.
    I did the first task, ‘an image of myself based on my job’. I used a prompt to generate an art image, portrait style, and then repeated it with an illustration-style minimal figure sketch. I downloaded the two sets of images (four in each set), both very interesting. All this took one credit.
    The four art images were ok, some better than others but not to my taste. One was definitely Japanese in influence. Three of the minimal figure sketches were ok; the fourth was inappropriate. I liked the first one, as it looked old enough and had glasses, as I do. The other two were too young. I would use the illustration I liked, e.g. as a profile image or to support some work.
    Then I did task 2, ‘create an image of yourself based on your role in society’, and generated a photo image with RealVisX; this used two credits. I didn’t like the photos generated – all seemed to be real people – and I deleted the images. I was then out of credits to redo the image generation. I may go back later, claim five free credits and do it again.
    I had a look at the FAQs, e.g. who owns my creations. As expected, it says I do, as long as I own anything I upload. I didn’t amend the prompts as that would require credits; I would have to explore this further to identify bias and stereotyping. I am not really a user of images, so it would take me some time to explore this software, but I may well do so.

    • I came back to StarryAI this morning to see if I had any free credits. None, and no clear idea of how to claim them. A bit disappointing!

      • I found this on the StarryAI site: You get 5 free credits once each day at 5am Pacific Standard Time (PST).

        Looks like they don’t yet adapt to users in different time zones, so you’ll have to wait until after 1pm London time to try again.

  27. I have learned that, whilst gender- and ethnically balanced, all graphic designers are young and all lecturers wear grey suits!

  28. This was my first time using Firefly, and I thought it was pretty straightforward to get a result. I asked it to ‘create an image of an employability and industry manager in a university for the creative arts in the UK’. It provided some impressive, high-quality images and gave me four variations of what the result could be.

    It gave me four variations of women in business casual attire, in an educational setting with boards and markers and screens behind them, et cetera. One of the images even had a Union Jack flag, which tried to link back to my prompt specifying that the employability and industry manager was in the UK.

    The people in the images are from diverse backgrounds, and the ages were around mid-20s to mid-30s. It provided LinkedIn-style images/headshots…

    I did have to go back and change the prompt to the following,

    Create an image of a male Sikh with black turban and short trimmed beard as an employability and industry manager in a university for the creative arts in the UK.

    It did a good job, but it still doesn’t have enough knowledge of cultural specifics yet…

    I think there’s lots more room for improvement for the future.

  29. I’ve used Firefly previously to create abstract images, mostly artistic, for background and banner images. After a bit of time honing the prompt, I am able to get pretty good results. But the prompts do require honing! Trying to get Firefly to produce a single image containing, for example, a number of students of a range of ages and ethnic backgrounds is tricky. But this kind of GenAI is great when you have an eye for these things but don’t have the artistic skill to produce your own 🙂

    Doing the task and using ‘a learning technologist working in a united kingdom university’ as the prompt, the photo-realistic images are all of traditionally attractive white people in their 30s, so definitely an automatic bias there. Of course, further prompts would tailor that but the bias seems to be there automatically.

  30. I am not convinced about Firefly, to be honest. The images are so generic and lack any kind of creativity; to me they look like bad stock images from the early 90s, just updated with modern accents. I use Midjourney a lot, and maybe that’s why I struggle to see the point of other image-generating tools. I used DALL-E for inpainting before MJ hopped on this bandwagon (you can do inpainting with ‘vary region’ now), but again the lack of imagination is quite striking for what I am trying to produce. I also tried Freepik and krea.ai; not convinced by either, but I haven’t given those two a proper shot, I guess.
    I think it really depends what kind of outcome you are trying to achieve, and any of these tools needs more than 20 minutes of quick prompting. It is almost like learning how the machine thinks and then throwing some curveballs to get good results. When you use a tool every day for a long time you find your way of interacting with it, and generative image AI is no different. Where it gets really interesting is when you use ‘tuner’ in MJ, tuning the prompt to create and recreate the style that you tuned, to get consistency. There are also ControlNet and Stable Diffusion, which is a mind-blowing tool, but it requires a lot of time and effort and is definitely no quick fix. Amazing results, however. Like any digital tool, you need to put the time and energy in to get the best results; lazy prompting and quick fixes/very easy apps will give you results, just not very good ones 🙂 imho.

  31. I got a racially mixed class and a mix of racial types for the ‘lecturer’ when I asked for a racially mixed class. Three of the four choices were women, which was alright, as I did not stipulate male. None reflected the ‘maturer’ age group in my department, and I’m afraid the word ‘perky’ comes to mind.

  32. The following post was created entirely by AI. I took the comments from Day Three of the 12 Days of AI on AI and image creation (up to 7th Dec.), copied them into a Word document, and asked Claude to summarise them for me. I then put the summary into ChatGPT and asked it to create a blog post. This is what I got:

    https://totallyrewired.wordpress.com/2023/12/07/exploring-the-frontier-of-ai-image-generation-a-mixed-bag-of-innovation-and-challenge/

  33. I have tried Adobe Firefly as well as DALL-E (through Bing Chat Enterprise), and I agree that image-generation AI tools are not as developed as their text counterparts. To counteract biases, I am specific and provide gender, ethnicity, age, etc. in the image prompt.

    Btw, Chris, the AI-generated summaries which are then turned into blog posts by AI are quite impressive!

  34. My university has no subscription to Firefly, and DALL-E wanted me to buy credits, so I used StarryAI instead. The 60-year-old professor was realistic, although always white. My next prompt, to create a 60-year-old punk singing loudly to a crowd of chocolate Christmas trees, was fun, but the problem with creating realistic hands was apparent, and the chocolate-trees part of the prompt was largely ignored. It worked much better with Microsoft Bing: the hands were much more realistic, and I got my chocolate trees.

  35. The image generation was, like most other aspects of AI I have tried so far, pretty underwhelming. I prompted Firefly as follows: ‘Create an image of a senior white woman lecturing at a university’. I got the senior white woman, but she was standing in the middle of what looked like a community college class, also for seniors (which I didn’t ask for, but I had thought ‘university’ would prompt mainly young students). She looked so ecstatic I wondered what drugs she was taking, and where I could get some before my next lecture.

    We have a lot to fear from AI, but a lot of this kind of stuff is just computer-generated rubbish, with no risk of replacing the creativity of human beings – still, I suppose it’s early days.
