Geoffrey Hinton, AI, and Google’s Ethics Problem

Written by Dr. Binoy Kampmark 

Talk about the dangers of artificial intelligence, actual or imagined, has become feverish, much of it induced by the growing world of generative chatbots.  When scrutinising the critics, attention should be paid to their motivations.  What do they stand to gain from adopting a particular stance?  In the case of Geoffrey Hinton, immodestly seen as the “Godfather of AI”, the scrutiny should be sharper than most.

Hinton hails from the “connectionist” school of thinking in AI, the once discredited field that envisages neural networks which mimic the human brain and, more broadly, human behaviour.  Such a view is at odds with the “symbolists”, who focus on AI as machine-governed, the preserve of specific symbols and rules.

John Thornhill, writing for the Financial Times, notes Hinton’s rise, along with other members of the connectionist tribe:  “As computers became more powerful, data sets exploded in size, and algorithms became more sophisticated, deep learning researchers, such as Hinton, were able to produce ever more impressive results that could no longer be ignored by the mainstream AI community.”

In time, deep learning systems became all the rage, and the world of big tech sought out such names as Hinton’s.  He, along with his colleagues, came to command absurd salaries at the summits of Google, Facebook, Amazon and Microsoft.  At Google, Hinton served as vice president and engineering fellow.

Hinton’s departure from Google, and more specifically his role as head of the Google Brain team, got the wheel of speculation whirring.  One line of thinking was that it took place so that he could criticise the very company whose achievements he had aided over the years.  It was certainly a bit rich, given Hinton’s own role in pushing the cart of generative AI.  In 2012, he pioneered a self-training neural network capable of identifying common objects in pictures with considerable accuracy.

The timing is also of interest.  Just over a month prior, an open letter was published by the Future of Life Institute warning of the terrible effects of AI beyond the wickedness of OpenAI’s GPT-4 and other cognate systems.  A number of questions were posed: “Should we let machines flood our information channels with propaganda and untruth?  Should we automate away all the jobs, including the fulfilling ones?  Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?  Should we risk loss of control of our civilization?”

In calling for a six-month pause on developing such large-scale AI projects, the letter attracted a number of names that somewhat diminished the value of the warnings; many signatories had, after all, played a far from negligible role in creating automation, obsolescence and encouraging the “loss of control of our civilization”.  To that end, when the likes of Elon Musk and Steve Wozniak append their signatures to a project calling for a pause in technological developments, bullshit detectors the world over should stir.

The same principles should apply to Hinton.  He is obviously seeking other pastures, and in so doing, preening himself with some heavy self-promotion.  This takes the form of mild condemnation of the very thing he was responsible for creating.  “The idea that this stuff could actually get smarter than people – a few people believed that.  But most people thought it was way off.  And I thought it was way off. […] Obviously, I no longer think that.”  He, you would think, should know better than most.

On Twitter, Hinton put to bed any suggestions that he was leaving Google on a sour note, or that he had any intention of dumping on its operations.  “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google.  Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google.  Google has acted very responsibly.”

This somewhat bizarre form of reasoning suggests that any criticism of AI will exist independently of the very companies that develop and profit from such projects, all the while leaving the developers – like Hinton – immune from any accusations of complicity.  The fact that he seemed incapable of developing critiques of AI or suggesting regulatory frameworks within Google itself undercuts the sincerity of the move.

In reacting to his longtime colleague’s departure, Jeff Dean, chief scientist and head of Google DeepMind, also revealed that the waters remained calm, much to everyone’s satisfaction.  “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions to Google […] As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI.  We’re continually learning to understand emerging risks while also innovating boldly.”

A number in the AI community did sense that something else was afoot.  Computer scientist Roman Yampolskiy, in responding to Hinton’s remarks, pertinently observed that concerns for AI safety were not incompatible with research within the organisation – nor should they be. “We should normalize being concerned with AI Safety without having to quit your [sic] job as an AI researcher.”

Google certainly has what might be called an ethics problem when it comes to AI development.  The organisation has been rather keen to muzzle internal discussions on the subject.  Margaret Mitchell, formerly of Google’s Ethical AI team, which she co-founded in 2017, was given the heave-ho after conducting an internal inquiry into the dismissal of Timnit Gebru, who had been a member of the same team.

Gebru was scalped in December 2020 after co-authoring work that took issue with the dangers arising from using AI trained and gorged on huge amounts of data.  Both Gebru and Mitchell have also been critical about the conspicuous lack of diversity in the field, described by the latter as a “sea of dudes”.

As for Hinton’s own philosophical dilemmas, they are far from sophisticated.  Whatever Frankenstein role he played in the creation of the very monster he now warns of, his sleep is unlikely to be troubled.  “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton explained to the New York Times.  “It is hard to see how you can prevent the bad actors from using it for bad things.”

Dr. Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge.  He currently lectures at RMIT University.  Email: bkampmark@gmail.com

The post Geoffrey Hinton, AI, and Google’s Ethics Problem appeared first on South Front.
