Q&A with Dr. Saul Robinson: Embracing the Power of AI

Date: 21 July 2023

On June 28, 2023, Assurity hosted an engaging webinar featuring presentations by Dr. Saul Robinson, Russell Ewart, and Chris Pollard.

The event drew a highly engaged audience, and because the webinar was time-constrained, a number of questions posed during the session unfortunately went unanswered.

This article aims to rectify that by compiling Dr. Saul Robinson's answers to the questions that could not be addressed during the webinar.

Q) Is there a way to augment ChatGPT with our own data but keep it separate?

Strictly speaking, there is no service available from OpenAI to customize their proprietary models. However, many models are openly available on Hugging Face (several derived from Meta's LLaMA) that have similar performance to ChatGPT and can be fine-tuned on your own data.
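As a rough, hedged sketch of what this can look like in practice, the example below attaches LoRA adapters to an openly available model and fine-tunes only those adapters on your own text. The model name and data file are placeholders, not recommendations. Because only the small adapter weights are trained and saved, what is learned from your data stays separate from the base model and can be swapped out or deleted.

```python
# Minimal sketch (not production code): fine-tune an open model on your own
# data using LoRA adapters. Model name and file path are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "openlm-research/open_llama_3b"   # placeholder: any suitable open model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach small trainable LoRA adapters; the base weights stay frozen, so
# everything learned from your data lives in a separate, swappable adapter.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Your own documents, one text snippet per line (hypothetical file name).
data = load_dataset("text", data_files={"train": "our_internal_docs.txt"})
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tuned-model")   # saves only the adapter weights
```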

Q) What kinds of skills and capability would a business need to have on a payroll or brought in to train a model? (e.g. 3D.laz and imagery object recognition)

In the general sense, putting AI models into production requires a keen understanding of the organizational objectives and skills in both AI research and engineering.

Using lidar (light detection and ranging) point cloud data to classify objects brings together several high-level skills: image processing, deep learning (there are some effective classical machine learning approaches as well), and experience in your own field. I'd recommend bringing in some expertise to help determine the best strategy if that's not something you already have.

Esri has several videos discussing what is possible in this area.
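For a concrete sense of the pieces involved in such a workflow, here is a minimal, illustrative sketch (file name, patch size, and class labels are hypothetical) that reads a .laz point cloud and runs fixed-size patches of points through a small PointNet-style classifier, the kind of deep learning component a lidar classification pipeline would contain.

```python
# Illustrative sketch only: read a .laz point cloud and classify fixed-size
# point patches with a tiny PointNet-style network.
import laspy                      # pip install "laspy[lazrs]" for .laz support
import numpy as np
import torch
import torch.nn as nn

las = laspy.read("survey_tile.laz")                      # hypothetical tile
points = np.vstack([las.x, las.y, las.z]).T.astype(np.float32)

# Group points into patches of 1024 and centre each patch (typical preprocessing).
n = (len(points) // 1024) * 1024
patches = points[:n].reshape(-1, 1024, 3)
patches -= patches.mean(axis=1, keepdims=True)

class TinyPointNet(nn.Module):
    """Per-point MLP followed by max-pooling: the core PointNet idea."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):                     # x: (batch, 1024, 3)
        features = self.point_mlp(x)          # per-point features
        pooled = features.max(dim=1).values   # order-invariant patch summary
        return self.head(pooled)              # class logits per patch

model = TinyPointNet(num_classes=4)   # e.g. ground / vegetation / building / other
logits = model(torch.from_numpy(patches[:8]))
print(logits.shape)                   # (8, 4)
```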

Q) What do you think of the EU AI act? Will it be effective? Is it targeting the right areas?

The EU AI Act is a huge step forward. Understanding and bracketing risk is a key function of government, particularly where understanding that risk is not self-evident to the layman. It is both practical and well considered as a basis for future regulation.

Q) In a government agency, how do we monitor the usage of AI and measure its value, whether positive or negative, to the organisation and our customers?

Given how widely available LLMs now are, a permissive approach may be best, so that it is actually possible to measure their usage.

Value is difficult to quantify objectively prior to implementation. This is particularly the case where processes are complex and have inter-dependencies. The research paper AI Adoption in Healthcare is a comprehensive yet accessible place to start.
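As one small, practical sketch of the "permissive but measured" idea, the wrapper below logs every LLM call made through it (who, when, for what purpose, at what size and latency) so that usage can actually be counted and reviewed later. The function and field names are illustrative, not any particular product's API.

```python
# Illustrative sketch: wrap whatever LLM call staff are permitted to use so
# that every call appends an audit record for later analysis.
import json, time, uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("llm_usage_log.jsonl")     # hypothetical audit log location

def logged_llm_call(call_llm, prompt, user_id, purpose):
    """Call an LLM via `call_llm(prompt) -> str` and append an audit record."""
    start = time.perf_counter()
    response = call_llm(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "purpose": purpose,                 # e.g. "draft letter", "summarise case"
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_s": round(time.perf_counter() - start, 3),
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Usage example with a stand-in model; swap in the real client call.
fake_model = lambda prompt: "stubbed response"
print(logged_llm_call(fake_model, "Summarise this complaint...", "analyst-42", "summarise case"))
```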

Q) What is your position on the petition and open letter signed by leading AI experts that calls for a pause in large-scale AI development and training?

The position of the letter is laudable. However, I don’t see it as being a tenable one. I also think that it misses an important point.

It's basic human nature to try new things, even if only to confirm what is clearly a poor idea. Because AI clearly has merit and the models and hardware are publicly available, the trial-and-error approach is inevitable and will continue unabated. What the letter misses, or at least fails to emphasize, is that we can regulate what AI is allowed to "do" right now. In line with the EU AI Act, we should be prohibiting the unfettered use of AI in high-risk applications.

Q) Do you feel the bias is accentuated by the fact that its main training corpus was English, so carries that bias based on a cultural perspective?

Yes, both a cultural perspective and a particular logical approach are carried by the principal use of English. Language and culture are intertwined. I would also point out that our culture is based on our lived collective experience. Implicitly, this means that our culture is "now" rather than what we said or did five years ago, yet the data being used to train LLMs is historical in this cultural sense.

Q) What is the new hardware for generative AI? For example, Nvidia is working on a chip for generative AI. Will this be similar to how graphics cards work with AI?

The Nvidia work appears to be focused primarily on process optimization. Many AI workflows have bottlenecks related to moving data between computational steps and to limited memory availability. Nvidia's new Grace Hopper system removes many of these restrictions, but fundamentally it performs the same calculations in the same manner as existing GPUs.
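To make the data-movement point concrete, the toy sketch below (using PyTorch purely for illustration) times copying a large tensor from CPU memory to a GPU against the matrix multiplication performed on it; on many systems the transfer takes as long as, or longer than, the computation itself, which is the kind of restriction the Grace Hopper design addresses.

```python
# Toy illustration of the data-movement bottleneck: time a host-to-device
# copy against the computation performed on the copied data.
import time
import torch

def timed(fn):
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return out, time.perf_counter() - start

x = torch.randn(8192, 8192)          # roughly 256 MB of float32 data on the CPU
if torch.cuda.is_available():
    x_gpu, copy_time = timed(lambda: x.to("cuda"))     # host-to-device transfer
    _, compute_time = timed(lambda: x_gpu @ x_gpu)     # the actual computation
    print(f"copy: {copy_time:.3f}s  matmul: {compute_time:.3f}s")
else:
    print("No GPU available; the point is that transfers, not FLOPs, often dominate.")
```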

Q) Does that mean that it just gets smarter over time as it gathers and improves its answers?

This is true for many systems, but only to a degree. The "more data" approach quickly reaches saturation; if that were not the case, Amazon would not still be suggesting that we buy one-time items repeatedly. In the case of LLMs, there is also the risk of destructive feedback when they consume generated data. This is highlighted in the key quote from the paper Training on generated data: "We find that use of model-generated content in training causes irreversible defects in the resulting models".
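The feedback risk can be shown with a toy simulation (purely illustrative, with a simple Gaussian standing in for a language model): each generation is fitted only to samples produced by the previous generation, and over successive generations the fitted distribution drifts while its spread tends to shrink.

```python
# Toy illustration of training on generated data: fit a Gaussian to samples
# drawn from the previous generation's fit and watch it drift and narrow.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                             # the "real world" distribution
for generation in range(1, 21):
    samples = rng.normal(mu, sigma, size=50)     # train only on generated data
    mu, sigma = samples.mean(), samples.std()    # fit the next-generation model
    if generation % 5 == 0:
        print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
# The spread tends to shrink and the mean wanders; information about the
# original distribution is gradually and irreversibly lost.
```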

Q) What will be the effects caused by plugins powered by AI? Specifically, plugins implemented in tools we are already using on a daily basis?

To date, there is little research on how such tools change people's behavior. However, I would suspect that there will be a tendency for people to follow the suggestion for the given task. In this way, output will be homogenized without any guarantee of quality. Keeping track of plugin usage and overall performance becomes particularly important, because a suggestion can become a business-impacting practice without the normally slow process of normalization and diffusion. Much as we have "viral trends" in social media, the same could happen in business, and in a destructive way.

Q) I am new to the world of AI. Besides ChatGPT, which other AI model would you recommend? And is there an online AI community that I can join?

ChatGPT, Bing Chat, and GPT-4 (all OpenAI/Microsoft) are the industry leaders. Google has Bard, but it currently lags well behind.

Two Minute Papers on YouTube is a good channel for keeping abreast of recent developments in an easily digestible format.

Q) How quickly do you think AI memory will increase? What are the limiting factors for that cap?

With the introduction of Nvidia's Grace Hopper system, the combined GPU memory limit for a model has increased from 320 GB to 144 TB. The limitations at this point are engineering ones rather than physics, although we are fast approaching physical limits.

Q) Given the example of an airplane autopilot, and that it is never to be trusted to fly unattended by humans, I am wondering what Saul thinks about self-driving cars, and whether they will ever become a reality in a truly driverless sense, or if there will always be some level of focused, unimpaired human supervision required. I’m also wondering, if human supervision will always be required, how we would keep the humans on the job, as, once the car is mostly driving itself, I’d think it would become even more tempting for the human to get distracted by their phone or take a nap.

I love this question. People perform wonderfully when they are neither overly stressed nor bored, and they are perfectly happy to do the same task for long periods when in this condition of "flow". If we can keep people in this flow state, they are happy to be doing the job and doing it well. So rather than having AI do the majority of a job, and thus making people feel that napping is the best use of their time, I find the approach of using AI to help people maintain flow the most appealing.

I would describe self-driving cars as having a “98% problem”. It’s straightforward–given time and money–to build a self-driving car that 98% of the time will drive significantly better than a human. But for that remaining 2%, it will make undeniably terrible decisions that no human would even consider. We’ve been promised self-driving since 2015, but that 2%–that people can handle so effortlessly–is surprisingly stubborn.

ADDITIONAL RESOURCES:

  • Whitepaper authored by Dr. Saul Robinson ‘Embracing AI’ – To download, click here.
  • Did you miss the webinar? Watch the recording here.