Geoffrey Hinton, AI, and Google’s Ethics Problem

Written by Dr. Binoy Kampmark 

Talk about the dangers of artificial intelligence, actual or imagined, has become feverish, much of it induced by the growing world of generative chatbots.  When scrutinising the critics, attention should be paid to their motivations.  What do they stand to gain from adopting a particular stance?  In the case of Geoffrey Hinton, immodestly seen as the “Godfather of AI”, the scrutiny levelled should be sharper than most.

Hinton hails from the “connectionist” school of thinking in AI, the once discredited field that envisages neural networks which mimic the human brain and, more broadly, human behaviour.  Such a view is at odds with the “symbolists”, who focus on AI as machine-governed, the preserve of specific symbols and rules.

John Thornhill, writing for the Financial Times, notes Hinton’s rise, along with other members of the connectionist tribe:  “As computers became more powerful, data sets exploded in size, and algorithms became more sophisticated, deep learning researchers, such as Hinton, were able to produce ever more impressive results that could no longer be ignored by the mainstream AI community.”

In time, deep learning systems became all the rage, and the world of big tech sought out such names as Hinton’s.  He, along with his colleagues, came to command absurd salaries at the summits of Google, Facebook, Amazon and Microsoft.  At Google, Hinton served as vice president and engineering fellow.

Hinton’s departure from Google, and more specifically his role as head of the Google Brain team, got the wheel of speculation whirring.  One line of thinking was that it took place so that he could criticise the very company whose achievements he had aided over the years.  It was certainly a bit rich, given Hinton’s own role in pushing the cart of generative AI.  In 2012, he pioneered a self-training neural network capable of identifying common objects in pictures with considerable accuracy.

The timing is also of interest.  Just over a month prior, an open letter was published by the Future of Life Institute warning of the terrible effects of AI beyond the wickedness of OpenAI’s GPT-4 and other cognate systems.  A number of questions were posed: “Should we let machines flood our information channels with propaganda and untruth?  Should we automate away all the jobs, including the fulfilling ones?  Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?  Should we risk loss of control of our civilization?”

In calling for a six-month pause on developing such large-scale AI projects, the letter attracted a number of names that somewhat diminished the value of the warnings; many signatories had, after all, played a far from negligible role in creating automation, obsolescence and encouraging the “loss of control of our civilization”.  To that end, when the likes of Elon Musk and Steve Wozniak append their signatures to a project calling for a pause in technological developments, bullshit detectors the world over should stir.

The same principles should apply to Hinton.  He is obviously seeking other pastures, and in so doing, preening himself with some heavy self-promotion.  This takes the form of mild condemnation of the very thing he was responsible for creating.  “The idea that this stuff could actually get smarter than people – a few people believed that.  But most people thought it was way off.  And I thought it was way off. […] Obviously, I no longer think that.”  He, you would think, should know better than most.

On Twitter, Hinton put to bed any suggestions that he was leaving Google on a sour note, or that he had any intention of dumping on its operations.  “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google.  Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google.  Google has acted very responsibly.”

This somewhat bizarre form of reasoning suggests that any criticism of AI will exist independently of the very companies that develop and profit from such projects, all the while leaving the developers – like Hinton – immune from any accusations of complicity.  The fact that he seemed incapable of developing critiques of AI or suggesting regulatory frameworks within Google itself undercuts the sincerity of the move.

In reacting to his longtime colleague’s departure, Jeff Dean, chief scientist and head of Google DeepMind, also revealed that the waters remained calm, much to everyone’s satisfaction.  “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions to Google […] As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI.  We’re continually learning to understand emerging risks while also innovating boldly.”

A number in the AI community did sense that something else was afoot.  Computer scientist Roman Yampolskiy, in responding to Hinton’s remarks, pertinently observed that concerns about AI safety and continued research within an organisation were not mutually exclusive – nor should they be. “We should normalize being concerned with AI Safety without having to quit your [sic] job as an AI researcher.”

Google certainly has what might be called an ethics problem when it comes to AI development.  The organisation has been rather keen to muzzle internal discussions on the subject.  Margaret Mitchell, formerly of Google’s Ethical AI team, which she co-founded in 2017, was given the heave-ho after conducting an internal inquiry into the dismissal of Timnit Gebru, who had been a member of the same team.

Gebru was scalped in December 2020 after co-authoring work that took issue with the dangers arising from using AI trained and gorged on huge amounts of data.  Both Gebru and Mitchell have also been critical about the conspicuous lack of diversity in the field, described by the latter as a “sea of dudes”.

As for Hinton’s own philosophical dilemmas, they are far from sophisticated.  Whatever Frankenstein role he played in the creation of the very monster he now warns of, his sleep is unlikely to be troubled.  “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton explained to the New York Times.  “It is hard to see how you can prevent the bad actors from using it for bad things.”

Dr. Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge.  He currently lectures at RMIT University.  Email: bkampmark@gmail.com

The post Geoffrey Hinton, AI, and Google’s Ethics Problem appeared first on South Front.
