Posted by Andrew Crockett

This week President Andrew, in his 94th and final President’s Post, reports on today’s Enterprise Forum by Zoom and reflects on the moral challenges posed by the development of Artificial Intelligence (AI).

The Last Post 27 June 2023

Today, the Vocational Service Committee presented its third Enterprise Forum for the year at which submariner Commodore (Ret) Peter Scott spoke about his book ‘Running Deep’, and what he learned as a submariner about leadership, resilience, and connection. He also provided insights into a working life spent underwater, the role of submarines in shaping geostrategic conditions in our region, and Australia’s next generation of nuclear-powered submarines. See Gordon Cheyne’s report on the talk below.

The three Enterprise Forums the Club has hosted in 2022/23 have all been excellent, with speakers enlightening us about three important areas of modern enterprise – the exploration of space, the detection and prevention of cybercrime, and Australia’s critical strategic deterrent, its submarine fleet. 

Thanks to Vocational Service Director Vincent Chen and his team for organising these forums. 

 

District Changeover 

The District Changeover lunch was held last Saturday and the Club was represented by President-elect Doug McLean, Club Service Director Terry Kitchen, President-nominee Dorothy Gilmour, and me. 

I’m delighted and proud to announce that the Club’s Rotary SAFE Families (RSF) project won a District ‘People of Action Award’ for empowering people to Recognise the signs of abuse in all its forms, Raise their concerns safely with victims, and Refer victims to appropriate support services (the three ‘Rs’).

Rotary SAFE Families was established in 2018 to help stop all forms of family violence by addressing its underlying causes.  It has evolved into a national program with a network of RSF ‘ambassadors’ in Rotary clubs in Victoria and interstate. Its website - https://rotarysafefamilies.org.au/ - contains many useful resources including short films, a manual, and translated information to assist Rotarians and others around Australia to play their part in stopping family abuse in all its forms.

Congratulations to co-founder and director of Rotary SAFE Families, Dorothy Gilmour, and her hardworking support team.

 

Zoom donations

Thank you to those who donated to the Club’s project funds when registering for today’s Enterprise Forum.  Today’s donations of $350 brought total donations at Club Zoom events in 2022/23 to $1,061.

 

Rochester appeal

The Club recently joined with residents of Sackville Grange to donate a combined total of $6,000 to purchase much-needed items for the 250 or so Rochester flood victims who are facing a winter in caravans and sheds. 

President-elect Doug McLean and Club Service Director Terry Kitchen travelled to Rochester last week to see for themselves the conditions people displaced by the floods are living under, and to assess what goods and equipment were most needed at this stage of the recovery efforts. See Doug’s report on the visit below. 

 

The Fixers tackle the UN

The Club’s enthusiastic fixers of the world’s problems met last night to tackle the question of how to reform the United Nations so that it can reach its full potential as a forum for securing international peace and security and promoting the well-being of all the peoples of the world. See Lawrence Reddaway’s take on the Fixers’ discussion below.

 

Signing off

Before I hand over to President-elect Doug McLean in three days’ time, I’d like to thank all those members of the Club who over the past two years have offered me many words of encouragement and whose support has sustained me in the role of President. 

When I joined Rotary in 2019, I chose Rotary Hawthorn as my club not only because it is such a friendly and welcoming club, but also because it runs local and overseas service projects that make a real difference to people’s lives.  This year Rotary Hawthorn celebrates 70 years of outstanding service to those who are less fortunate than us, and I am privileged to have been able to serve as its President for two of those years.

I wish Doug and his team the very best for 2023/24, and when Pam and I return from overseas in mid-August I look forward to continuing to serve on the Board as Vice-President and Club Secretary.

 

Next Club meeting

The next Club meeting will be at Kooyong on Tuesday 4 July when our speaker will be new President Doug, who will tell us all about the ‘President Behind the Badge’.

Next week is also the start of our winter campaign to support Camcare’s Emergency Food Relief program so please bring along your donations of non-perishable foods to that meeting.

Au revoir,

 

Thought for the Week

This being my last Post and final ‘Thought for the Week’, I thought I’d take a glimpse into the near future and reflect on the ethical challenges of Artificial Intelligence (AI).

AI has been much in the news recently, with some experts making dire predictions that if the development and application of AI is not sufficiently controlled, then it could pose a threat to the future of humanity.

AI can deliver benefits such as improved healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy. But it could also be applied in harmful ways, such as promulgating misinformation, discriminating against minorities, and creating and delivering autonomous weapons of mass destruction.

To curb potentially harmful applications, the development and use of AI will need to be regulated globally by strong and enforceable rules. These rules will inevitably have to be based on ethical principles of universal application.

Given that over the past few weeks we have been considering how four mainstream ethical theories can inform the application of Rotary’s Four-Way Test, I thought it might be of interest to consider how these theories might be used to develop a set of ethical principles for AI.

One of the earliest attempts to develop an ethical framework relevant to AI was the ‘Three Laws of Robotics’, a hierarchy of rules developed by biochemist and science fiction author Isaac Asimov.[1]  The Three Laws of Robotics are:[2]

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

Clearly evident in the First Law are the Kantian principles of respecting individual autonomy and treating humans not as a means to an end but as ends in themselves (Autonomy), and the ethics-of-care principles of caring for and protecting the weak (Care). On the other hand, a consequentialist approach could be used to justify causing harm when necessary to avoid greater harm to others (Utility). This might, for example, justify the use of autonomous weapons in wartime.

The Second and Third Laws are essentially ‘machinery provisions’, the effect of which is to preserve the paramountcy of the First Law, rather than themselves being ethical in nature. 

The core ethical value of the Three Laws of Robotics is therefore the prevention of harm to humans. This invites comparison with two basic principles of medical ethics:

·      Autonomy, which is respect for an individual’s right to make decisions on their own behalf free from coercion, and

·      Non-maleficence, which is the duty to do no harm.

Asimov’s Laws have been criticised for assuming that moral decisions can be made by using algorithms, which is too simplistic given that more complex ethical issues cannot be resolved with a simple ‘yes’ or ‘no’ answer. The application of the Laws also depends on what meaning is given to key terms such as ‘humans’ and ‘harm’. For example, in one of Asimov’s stories, robots are made to follow the Laws, but they are programmed with a meaning of ‘human’ that excludes certain groups and thus permits genocide.
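This criticism can be made concrete with a deliberately naive sketch. Suppose we did try to encode the Three Laws as a priority-ordered checklist, with actions described by simple yes/no flags (the flag names below are purely illustrative, not from any real system). The sketch works only because it presupposes crisp, uncontested definitions of ‘human’ and ‘harm’ – which is precisely what Asimov’s genocide story shows we cannot assume.

```python
# A toy, illustrative encoding of Asimov's Three Laws as a
# priority-ordered checklist. The boolean flags (harms_human,
# ordered_by_human, self_preserving) are hypothetical labels:
# reducing ethics to such flags assumes crisp definitions of
# 'human' and 'harm', which is the very point under criticism.

def evaluate(action: dict) -> str:
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return "forbidden"
    # Second Law: obey human orders, unless that conflicts with the First Law
    # (any conflict was already caught by the check above).
    if action.get("ordered_by_human"):
        return "required"
    # Third Law: self-preservation, subordinate to the first two Laws.
    if action.get("self_preserving"):
        return "permitted"
    return "permitted"
```

Note how the flaw surfaces: if `harms_human` is computed from a definition of ‘human’ that excludes a particular group, the checker would happily return ‘required’ for an atrocity ordered by a human, exactly as in Asimov’s story.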

So, while one cannot argue with Asimov’s core value of doing no harm to humans, a more sophisticated approach to regulation is needed if AI is to be applied for the greater good of humanity. 

A more recent attempt to come up with a comprehensive set of AI principles was made at the Asilomar Conference on Beneficial AI organised by the Future of Life Institute in California in 2017.[3] The Asilomar principles were developed with the goal of ensuring that AI research creates beneficial intelligence, not undirected intelligence.

In the following summary of the Asilomar principles the primary ethical value underpinning each principle is in brackets.

AI systems should:

·      remain under human direction and control (Autonomy);

·      be compatible with ideals of human dignity, rights, freedoms, and cultural diversity (Autonomy);

·      benefit and empower as many people as possible (Utility);

·      enable individuals to access, manage and control the data they generate (Autonomy);

·      share the economic prosperity created for the benefit of all humanity (Utility);

·      improve the social and civic processes on which the health of society depends (Care); 

·      not unreasonably curtail people’s liberty (Autonomy); and

·      not lead to an arms race in lethal autonomous weapons (Utility).

 

Countries across the globe are beginning to take the threat of AI seriously and 37 AI-related bills were passed into law globally in 2022.[4]

Earlier this month, the European Parliament, the legislative branch of the European Union, passed a draft law known as the AI Act, which will impose restrictions on the riskiest uses of AI, such as facial recognition software, and require transparency about the data used to create AI systems like ChatGPT.[5]

The Australian Government is currently considering whether to adopt AI risk classifications like those being developed in Canada and the EU.

 

[1]  https://en.wikipedia.org/wiki/Isaac_Asimov

[2] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

[3] https://www.techopedia.com/exploring-the-asilomar-ai-principles-a-guide-to-ensuring-safe-and-beneficial-ai-development

[4] Stanford University's 2023 AI Index, https://www.weforum.org/agenda/2023/05/top-story-plus-other-ai-stories-to-read-this-month/

[5] https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html