AI ... and the rhinoceros in the room redux

Understanding the discussion about AI benefits from hands-on experience of tools and services.

We are becoming familiar with a new vocabulary and new practices: prompt engineering, image generation, hallucination, and so on. In my last post I noted how AI can be weird (to use Ethan Mollick's term for it) because it may not always behave as expected, and can surprise. The structure and content of instructions can influence responses in different ways. I made the point that experience was important to understanding:

Because of this, it is important to build up some tacit knowledge of how these tools behave, and how they can surprise. Understanding the potential and limitations of AI tools depends on use, and on developing some experiential understanding of behaviours. To explain the difference between Perplexity.ai and ChatGPT one needs to have tried them out. To see the impact of prompt engineering techniques it is helpful to have tried different approaches and seen the quite different results one can achieve. And also to see different results across services. // Generative AI and libraries: 7 contexts

So, again, the best way to understand hallucination is to see actual examples of it. The best way to get a sense of why prompting is important is to try ChatGPT or one of the other tools and to see how structuring the prompt in particular ways or adding elements can change the outputs. To understand how search and AI may develop it is useful to actually try out Perplexity.ai or Bing Chat (now rebranded as Copilot) to see the interplay between web search and AI. And so on.

Crafting an effective prompt is both an art and a science. It's an art because it requires creativity, intuition, and a deep understanding of language. It's a science because it's grounded in the mechanics of how AI models process and generate responses. // Datacamp
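
To make that concrete, here is a minimal sketch of the kind of comparison I have in mind. It assumes the OpenAI Python library (v1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and the two prompts are purely illustrative, not examples taken from my own sessions.

```python
# A minimal sketch: the same question asked with and without structure.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send a chat completion request and return the reply text."""
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Bare prompt: no role, no audience, no constraints on format or length.
bare = ask([
    {"role": "user", "content": "Describe the role of a public library."}
])

# Structured prompt: a system role plus explicit audience and format
# constraints. The same underlying question, framed differently, will
# typically produce a noticeably different answer.
structured = ask([
    {"role": "system",
     "content": "You are a librarian writing a briefing for local government funders."},
    {"role": "user",
     "content": ("Describe the role of the public library as a place for "
                 "learning, research and social interaction. "
                 "Answer in three short bullet points.")},
])

print("--- bare ---\n", bare)
print("\n--- structured ---\n", structured)
```

Running both and comparing the outputs side by side is a quick way of building the experiential sense of prompting discussed above.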

Of course, experience alone cannot make you an expert in how LLMs work, although, conversely, knowing how they work certainly improves your interactions with them. Nor will experience alone reveal the ways in which dominant historical perspectives or attitudes influence the results, or which results may be fabricated. (An LLM is a Large Language Model, which is what powers ChatGPT and other services. For more about these, including some developed in the cultural and scholarly domains, see the longer AI posts linked below.)

In that context, I was rather struck by the binary framing in the coverage of the somewhat depressing OpenAI board/CEO episode: techno-optimism on one side, grave societal risk on the other. Discussion of the current state of AI mostly ignored more immediate pressing concerns and possible remediations (to do with the composition of training data, documentation, hidden labor, mitigation of bias, and so on). (See the DAIR publications for examples of such issues.)

In my last post, I discussed how understanding these issues is an important part of library work, related to new research skills, policy input, advocacy and other library roles, and I also emphasized the need for staff support. Here I am interested in the specific point that understanding the general AI discussion benefits from experience in using tools and services.

It is helpful to try out services in areas where you have personal knowledge or expertise, so as to be able to weigh and assess. For example, when looking at a service I sometimes ask questions about Irish diaspora populations in the UK and the US, and differences in perceptions, influence, and so on. It is interesting to see variability across services, and again how prompting can guide responses. One can see an occasional leaning into stereotypes but also the ability to note the existence of stereotypes. Of course, this is very impressionistic - it will be interesting to see more research work on the cultural and social attitudes embedded in LLMs.

Discussion about the library itself is an interesting case. In my inexpert experience it has required quite a bit of prompting to move LLMs away from very stereotypical views of the library and librarians, based presumably on dominant public perceptions in training sets. This suggests in turn how much work we have to do to change the perception of the library in the public record that feeds the LLMs.

I did not discuss image generation much in my earlier AI posts (links below) because I had not used image generation tools very much until recently. I have been looking a little more at DALL·E. The first image below is the response to a very straightforward and simple prompt, 'generate an image of a library.' It certainly leans into a classical view of what a library is, although I was interested to see the fire! I then iteratively tried to move it from a library configured around collections to a library configured around learning, research and social interaction. I did not spend too long with it, and would not use any of the results. However, certain library tropes remain strong. My requests for social learning spaces with whiteboards didn't seem to have much influence. I was amused to see that 'group work' has been transformed somewhat in the wall art in the third picture!
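
For those who would rather iterate on image prompts programmatically than in a chat interface, here is a similar minimal sketch, this time assuming the OpenAI Python library's images endpoint with DALL·E 3. The prompts are loose paraphrases for illustration, not the exact ones behind the images described here.

```python
# A minimal sketch of iterating on an image prompt with DALL·E 3.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def generate(prompt):
    """Request a single DALL·E 3 image and return its URL."""
    response = client.images.generate(
        model="dall-e-3",   # DALL·E 3 returns one image per request
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return response.data[0].url

# First pass: the simple prompt, which tends to produce the classical,
# collection-centred view of a library.
print(generate("generate an image of a library"))

# Second pass: steer away from shelving and towards people and activity.
print(generate(
    "A library configured around learning, research and social interaction: "
    "group study tables, whiteboards, people collaborating, only a few bookshelves"
))
```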

Ongoing adoption and media coverage mean that AI usage will increasingly have a shared vocabulary and experience. Familiarity with this will be important in work settings, and will likely be part of the general social background.

In this context, I was reminded of a passage from Steven Johnson’s Everything Bad Is Good for You. He is talking about gaming, but it also seemed relevant to our current moment. I wrote about this some years ago.

I worry about the experiential gap between people who have immersed themselves in games, and people who have only heard secondhand reports, because the gap makes it difficult to discuss the meaning of games in a coherent way. It reminds me of the way the social critic Jane Jacobs felt about the thriving urban neighborhoods she documented in the sixties: “People who know well such animated city streets will know how it is. People who do not will always have it a little wrong in their heads – like the old prints of rhinoceroses made from travelers’ descriptions of the rhinoceroses.”

As I remarked then, the Jane Jacobs analogy is striking and suggestive. Somebody who has not internalized the experience, but relies on reading or conversation, may well have it a 'little wrong' and miss the meaning.

The same is now true of AI. Understanding and participating in the conversation benefits from hands-on experience.

Note: For the moment anyway, Microsoft is making the premium GPT-4 model and the DALL·E 3 image generation tool available to anyone with a Microsoft account through Microsoft Copilot. This provides similar functionality to the paid version of ChatGPT.

Acknowledgement: Thanks to Christina Rodriques and Chance Hunt for generously providing helpful comments on a draft.

Picture Credit: Rhinoceros by David Kandel. Published in the 1598 edition of Sebastian Münster's Cosmographia. Available on Wikimedia Commons under the Creative Commons Attribution-Share Alike 2.0 Generic license.

Related posts: See my series of longer posts for a fuller discussion of AI. I plan a fourth in the series at some stage looking at potential impact on library services.

Generative AI and libraries: 7 contexts
Libraries are engaging with AI in their educational, service and policy work. This post discusses seven contexts in which that work is taking place.
Generative AI, scholarly and cultural language models, and the return of content
AI momentum continues to grow, and we are seeing more applications in the scholarly and cultural spaces. Several organizations have been creating specialist large language models based on reservoirs of curated scientific and cultural data.
Generative AI and large language models: background and contexts
The promise and challenge of Generative AI is now central. This is a summary overview of some of the major directions and issues, acknowledging that things are moving very quickly. I intend it as background to later posts about library implications and developments.