Artificial Intelligence (AI) 人工智能
Sun Dong says the government is studying the Cyberspace Administration's AI regulations; Hong Kong will have its own rules in due course
https://news.tvb.com/tc/local/643958f9b9...5%E8%A6%8F

The Cyberspace Administration of China has proposed tightening the management of the development and application of generative artificial intelligence (AI). Sun Dong, Secretary for Innovation, Technology and Industry, said the HKSAR government takes an open attitude towards the technology's development, but currently has no plans to use ChatGPT within the government.

Sun Dong, Secretary for Innovation, Technology and Industry, said: "Since OpenAI has not yet officially made ChatGPT available in Hong Kong, and given the potential information-security risks it poses, the government currently has no plans to adopt ChatGPT as an application for internal use. On the other hand, given how rapidly generative AI has advanced recently, the government will continue to take a very open attitude towards new technologies, and we are watching closely where this technology goes next."

At a special Legislative Council Finance Committee meeting, Sun revealed that his bureau has consulted the industry on whether Hong Kong is positioned to keep pace with AI development and how to pursue related research. The government is studying the mainland Cyberspace Administration's AI regulations, and he believes Hong Kong will have similar rules in the near future.

Elon Musk claims the US government can read all Twitter private messages; warns AI will destroy civilisation
https://www.stheadline.com/world-live/32...7%E6%98%8E

Musk threatens to sue Microsoft, accusing it of illegally using Twitter data to train AI
https://std.stheadline.com/realtime/arti...E7%B7%B4AI

US plans special task force to guard against AI threats
https://hk.finance.yahoo.com/news/%E7%BE...35410.html

The US Department of Homeland Security plans to set up an artificial intelligence (AI) task force to examine the potential dangers and possible uses of AI.

In written remarks to the Council on Foreign Relations, Homeland Security Secretary Alejandro Mayorkas said AI will alter the threat landscape in various ways, so it must be addressed and existing tools strengthened to guard against the associated threats.

AI face theft: mainland influencer finds herself starring in pornographic videos, suspects her face was "stolen"
https://std.stheadline.com/realtime/arti...C%E4%BA%86


Microsoft: China will be ChatGPT's main rival — "China's AI development is in a leading position"
https://www.msn.com/zh-hk/lifestyle/gadg...6049&ei=16

Bill Gates predicts AI will be able to teach children within six months; even teachers may be replaced
https://www.msn.com/zh-hk/lifestyle/gadg...28b8&ei=19
"Godfather of AI" warns some AI systems are surpassing human intelligence, fears the technology could fall into the wrong hands
https://news.tvb.com/tc/world/6451eab6fe...B%E4%B8%AD

Computer scientist Geoffrey Hinton, known as the "Godfather of AI", has resigned from Google. He has spoken publicly about the risks of artificial intelligence, warning that some systems are surpassing human intelligence and voicing concern that the technology could fall into the wrong hands.

The 75-year-old computer scientist gave interviews to several media outlets after resigning, discussing AI's risks at length and warning that, through deep learning, AI's capacity for data and information will exceed that of the human brain.

Hinton said: "Right now, we are seeing AI chatbots such as GPT-4 eclipse the general knowledge any human possesses, far surpassing us. In reasoning they are weaker than humans, but they can already do simple reasoning."

In an interview with the BBC, Hinton said the intelligence and pace of development of today's AI have diverged from expectations. AI systems can not only learn individually but also share knowledge with other AI systems automatically and instantly, so the amount of information they hold far exceeds any human's, and before long they will be smarter than we are.

Hinton's research on neural networks and deep learning laid the foundations for today's AI systems such as ChatGPT, yet he now regrets his work. He said that AI's rapid development could not only displace people from their jobs, but could also spread misinformation that makes it hard to tell truth from falsehood. In the worst case, bad actors will use AI to do harm, posing enormous risks to humanity and society.

Even so, Hinton said that in the short term AI's benefits far outweigh its risks, so development should not stop; governments, he added, have a responsibility to ensure AI develops in the right direction.

Study estimates AI will eliminate 26 million jobs worldwide within five years
https://news.rthk.hk/rthk/ch/component/k...230501.htm

A World Economic Forum report predicts that artificial intelligence (AI) will cause major disruption to the labour market over the next five years. The arrival of AI applications such as ChatGPT will displace many roles involving reasoning, communication and coordination, with significant effect.

The study surveyed more than 800 companies across 45 economies, together employing about 11.3 million workers. About 75% of respondents said they expect to adopt AI within five years, eliminating an estimated 26 million jobs in areas such as cashiers, ticketing clerks, data entry and accounting.

The report notes that, beyond AI, other economic shifts such as digitalisation, the green energy transition and supply-chain reshoring will also change nearly a quarter of jobs worldwide.

The report argues that, compared with other macroeconomic factors such as slowing growth, supply shortages and inflation, AI currently poses a smaller threat to labour-market prospects. Job creation is likely to come from corporate green transitions, wider adoption of ESG standards and the repositioning of global supply chains.

Godfather of AI regrets his work, fears risks to society and humanity
https://news.rthk.hk/rthk/ch/component/k...230502.htm

Geoffrey Hinton, the scientist known as the godfather of artificial intelligence (AI), believes advances in the field pose profound risks to society and humanity, and admits he regrets his work developing the technology.

The 75-year-old recently resigned from Google. He said he left partly because of his age, and partly so he could speak about AI's dangers without having to weigh the impact on Google. He worries that AI will create convincing fake images and text, producing a world in which people can no longer tell what is true, and that bad actors will use it for harm.

Hinton also said he once believed AI would take at least 30 to 50 years to become smarter than humans, but he no longer thinks that day is far off.

Google's chief scientist, Jeff Dean, responded that the company remains committed to a responsible approach to AI, continually learning to understand emerging risks while also innovating boldly.

AI godfather leaves Google, warns artificial intelligence will become smarter than humans
https://std.stheadline.com/realtime/arti...0%E6%98%8E

IBM announces hiring pause on 7,800 roles that could be replaced by AI
https://www.hk01.com/article/893758
Warning: will super-intelligent AGI escape human oversight? A global emergency call to halt AI training? [Chinese subtitles]
https://www.youtube.com/watch?v=QPYigF-YV_I
[youtube]QPYigF-YV_I[/youtube]
Full interview: "Godfather of artificial intelligence" talks impact and potential of AI
https://www.youtube.com/watch?v=qpoRO378qRY
[youtube]qpoRO378qRY[/youtube]

AI 'godfather' quits Google over dangers of Artificial Intelligence - BBC News
https://www.youtube.com/watch?v=DsBGaHywRhs
[youtube]DsBGaHywRhs[/youtube]
60 Years of Artificial Intelligence at Stanford
https://www.youtube.com/watch?v=Cn6nmWlu1EA
[youtube]Cn6nmWlu1EA[/youtube]

AI's Human Factor | Stanford's Dr. Fei-Fei Li and OpenAI CTO Mira Murati
https://www.youtube.com/watch?v=9B02MzWwkSo
[youtube]9B02MzWwkSo[/youtube]

Dr. Fei-Fei Li on Human-Centered AI
https://www.youtube.com/watch?v=06M_xmHmDfw
[youtube]06M_xmHmDfw[/youtube]

Signal360 Archives: Dr. Fei Fei Li
https://www.youtube.com/watch?v=QKLRvrTospk
[youtube]QKLRvrTospk[/youtube]
IBM expects AI to replace 30% of clerical jobs within five years; Alibaba's Joe Tsai says there is no need to worry
https://hk.on.cc/hk/bkn/cnt/finance/2023...2_001.html

IBM, which recently launched its upgraded WatsonX platform — offering foundation models and generative AI, an AI development studio, databases and a governance toolkit — has effectively joined the AI race. Its chief executive Arvind Krishna believes AI will replace many white-collar clerical jobs, particularly more repetitive roles that perform the same task over and over, and that 30% of them could disappear within five years.

Hiring paused for roles AI can handle

Krishna told the media that IBM will stop hiring for positions AI can fill, though he added that these reductions will come through natural attrition, and that AI will also create jobs. As the technology becomes ever more competitive and a source of advantage, the company will need to hire more people for such work. In other words, value-creating roles will grow and total headcount will rise overall, while back-office positions shrink.

On working from home (WFH), Krishna said bluntly that it is not the best choice for an employee's career. It suits roles focused on individual contribution, but because WFH makes it impossible to observe someone's actual working ability or how they deal with people, it works against pay rises and promotion. IBM does not currently insist that staff return to the office, only encouraging them to come in three days a week.

SAS commits HK$7.8 billion to expand solutions

Joe Tsai, executive vice-chairman of Chinese tech giant Alibaba (09988), offered reassurance, saying there is no need to worry too much about AI robots becoming smarter than humans or replacing them. The human brain contains billions of cells that have yet to be understood or explored; robots can hardly replicate human relationships, emotional intelligence, feelings and desires, do not share human bonds such as those between parent and child, husband and wife, or friends, and can hardly produce a human-like "next generation".

Separately, data analytics firm SAS announced on Wednesday that it will invest US$1 billion (about HK$7.8 billion) over the next three years to further develop advanced analytics solutions tailored to the specific needs of individual industries, including banking, government, insurance, healthcare, retail and energy.


Chinese tech firms reportedly seek to cut reliance on US chips, researching cutting-edge AI on lower-end chips
http://www.aastocks.com/tc/stocks/news/a...-news/AAFN

According to research papers and related reports, US sanctions are prompting Chinese technology companies to accelerate their research, seeking to develop cutting-edge AI without relying on the most advanced American chips.

Chinese companies are reportedly studying techniques that could let them achieve state-of-the-art AI performance with fewer or less powerful semiconductors. They are also researching how to combine different types of chips to avoid dependence on any single kind of hardware.

80% of jobs could be replaced by AI within years; AI authority says that would actually be a good thing
https://www.am730.com.hk/%E5%9C%8B%E9%9A...%8B/376175


Ben Goertzel, a leading US artificial intelligence (AI) researcher, expects AI could replace 80% of human jobs within the next few years, but he does not see that as a bad thing.

The 56-year-old, Brazilian-born Goertzel is a mathematician, cognitive scientist and well-known robot creator. He is founder and CEO of SingularityNET, whose team is working on "artificial general intelligence" (AGI) — AI with human-level cognitive abilities.

Expert: hoping AI can be as smart as a person
Speaking to the media at last week's Web Summit in Rio de Janeiro, Brazil, Goertzel said he expects AGI to arrive within a few years. Asked how far away AGI is, he replied that if we want machines to be genuinely as smart as people, and just as flexible when facing unknown situations, we need a major leap beyond today's training and programming. "We're not there yet, but I think there's reason to believe we're years rather than decades away from that goal."

Despite the controversy, he does not support pausing AI research
On the controversies and risks raised by AI systems such as ChatGPT, Goertzel does not believe research should be paused: "These AI systems are very interesting, but they don't have human-like AGI capability. They can't do the complex multi-stage reasoning needed for scientific research, and they can't create anything beyond their training data."

Goertzel said: "I find it very strange that people suggest pausing AI development because it might spread misinformation." By that logic, "why not ban the internet? ... I think we should have a free society, and just as the internet shouldn't be banned, we shouldn't ban AI."

Why would AI replacing human jobs be a good thing?
On AI possibly replacing human work, Goertzel said: "My guess is that, even without AGI, around 80% of human jobs will be replaced." He was referring not to ChatGPT but to similar systems that will appear in the coming years.

Goertzel said: "I don't think it's a threat — I think it's a good thing. Beyond working for a living, people can look for more meaningful things to do." He believes almost all clerical work should be automated.

AI robots could serve as carers
On what robots can contribute to society, Goertzel pointed to Grace, a robot nurse unveiled at the summit. Many elderly people in the US live lonely lives in care homes, lacking emotional and social support. Introduced into care facilities, robots could answer questions, listen to stories, help the elderly call their children or shop online, improving their quality of life. "Once we get to AGI, they'll be even better companions."

Goertzel added: "That way, human jobs aren't being taken either, because there basically aren't enough people to meet the demand for care work." He believes robots will also find a strong market in areas such as education and domestic services.

AI "clone" offers "virtual girlfriend" service; US Snapchat influencer expected to earn nearly HK$40 million a month
https://www.am730.com.hk/%E5%9C%8B%E9%9A...%83/376633
70% of Hong Kong firms plan to adopt AI
https://orientaldaily.on.cc/content/fina...00202_031/

OpenAI CEO says regulation of the AI industry is vital
https://news.rthk.hk/rthk/ch/component/k...230517.htm

Sam Altman, chief executive of AI company OpenAI, testified before the US Senate on whether artificial intelligence poses safety risks.

OpenAI's ChatGPT has recently sparked a craze in the AI world. The chatbot can write essays, scripts and poetry in a human-like way and solve computer coding problems, prompting regulators to examine the privacy, legal and other issues involved, with the aim of putting guardrails around AI development without stifling innovation.

Testifying, Altman said AI has the potential to solve humanity's biggest challenges, such as climate change and cancer treatment, while acknowledging that its advances will have a major impact on the workforce. He described AI in its current direction and form as a tool rather than a creature, but conceded that as AI grows more powerful, government regulation of the industry is vital.

Asked about AI's impact on music, Altman said content creators should have a say in how their voices, likenesses and copyrighted content are used to train AI models.

Microsoft CEO rebuts claim of controlling OpenAI, says small firms can take on big ones
https://hk.on.cc/hk/bkn/cnt/finance/2023...2_001.html

AI scams erupting across China: company boss defrauded of RMB 4.3 million
https://hk.finance.yahoo.com/video/ai%E8...00528.html

Suspected AI-generated image of Pentagon explosion goes viral; US stocks briefly plunge
https://hk.news.yahoo.com/%E7%96%91ai%E7...01521.html
Fake photo of Pentagon explosion goes viral; AI image briefly sparks panic in US stocks
https://std.stheadline.com/realtime/arti...0%E6%85%8C
ANI, AGI and ASI - what do they mean?
https://youevolve.net/ani-agi-and-asi-wh...they-mean/
What are ANI, AGI and ASI?
https://zhuanlan.zhihu.com/p/33910684
The truth about the AI alphabet soup (ANI, AGI, ASI)
https://bdtechtalks.com/2022/11/03/artif...abet-soup/

AI is frequently explained using the categories artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super-intelligence (ASI).[1] Although this conceptual framework provides nothing of real value, it finds its way into many discussions.[2] If you are unfamiliar with these categories, consider yourself lucky and move on to another, more consequential article. If you are unlucky, I invite you to keep reading.

First and foremost, bemoaning categorizations — as I am about to do — has limited value, because categories are arbitrarily similar or distinct depending on how we classify things. For example, the Ugly Duckling Theorem demonstrates that swans and ducklings can be made identical if we manipulate the properties used for comparison. All differences are meaningless unless we have some prior knowledge about those differences. Alas, this article will unpack these suspicious categories from a business perspective.

Artificial narrow intelligence (ANI) is often conflated with weak artificial intelligence. John Searle, philosopher and professor at the University of California, explained in his seminal 1980 paper, “Minds, Brains, and Programs,” that weak artificial intelligence would be any solution that is both narrow and a superficial look-alike to intelligence. Searle explains that such research would be helpful in testing hypotheses about segments of minds but would not be minds.[3] ANI reduces this by half and allows researchers to focus on the narrow and superficial and ignore hypotheses about minds. In other words, ANI purges intelligence and minds and makes artificial intelligence “possible” without doing anything. After all, everything is narrow, and if you squint hard enough, anything is a superficial look-alike to intelligence.

Artificial general intelligence (AGI) is the idealized solution many conceive when thinking about AI. While researchers work on the narrow and superficial, they talk about AGI, which represents the single story of AI, dating back to the 1950s, with a revival in the past decade. AGI implies two things about a solution that should not apply to business-centric problem-solving. First, a program has the general aptitude for human intelligence (perhaps all human intelligence). Second, an AGI is a general problem solver or a blank slate, meaning any knowledge of a problem is rhetorical and independent of a strategy to solve that problem.[4] Instead, the knowledge depends on some vague, ill-defined aptitude relating to the multidimensional structure of natural intelligence. If that sounds ostentatious, it's because it is.

Artificial super-intelligence (ASI) is a by-product of accomplishing the goal of AGI. A commonly held belief is that general intelligence will trigger an “intelligence explosion” that will rapidly trigger super-intelligence. It is thought that ASI is “possible” due to recursive self-improvement, the limits of which are bounded only by a program’s mindless imagination. ASI accelerates to meet and quickly surpass the collective intelligence of all humankind. The only problem for ASI is that there are no more problems. When ASI solves one problem, it also demands another with the momentum of Newton’s Cradle. An acceleration of this sort will ask itself what is next ad infinitum until the laws of physics or theoretical computation set in.

The University of Oxford scholar Nick Bostrom claims we will have achieved ASI when machines are more intelligent than the best humans in every field, including scientific creativity, general wisdom, and social skills.[5] Bostrom's depiction of ASI has religious significance. Like their religious counterparts, believers in ASI even predict specific dates when the Second Coming will reveal our savior. Oddly, Bostrom can't explain how to create artificial intelligence. His argument is regressive and depends upon itself for its explanation. What will create ASI? Well, AGI. Who will create AGI? Someone else, of course. AI categories suggest a false continuum at the end of which is ASI, and no one seems particularly thwarted by their ignorance. However, fanaticism is a doubtful innovation process.

Part of our collective problem when talking about AI is that we entrench our thinking in prevalent but useless dichotomies.[6] False dichotomies create an artificial sense that there is an alternative. ANI, AGI, and ASI suggest some false balance among various technologies by presenting multiple sides of an argument that don't exist. Even if we accept the definition of ANI and ignore its triviality, there is nothing persuasive about AGI or ASI. It is odd to evaluate today's technology by invoking something that will not exist, while giving today's technology the catchier name of ANI. We do not compare birds to griffins, horses to unicorns, or fish to sea serpents. Why would we compare (or scale) computation to human intelligence or the intelligence of all humans?

Any explanation that includes AGI or ASI distorts reality. Anchoring is a cognitive bias in which an individual relies too heavily on an initial piece of information (known as the “anchor”) when making decisions. Studies have shown that anchoring is challenging to avoid, even when looking for it.[7] Even if we recognize AGI and ASI as significantly wrong or misplaced, they can still distort reality and create misalignments. We must not be fooled by a false dichotomy and a false balance.

AI is not three things. It is not something that scales by “intelligence” or fits neatly into three bins. These categories do not delineate specific technologies, highlight research areas, or capture some continuum where one starts by working on ANI and finishes with ASI. They’re nonsense. AI is one thing: a singular and unprecedented goal to recreate intelligence ex nihilo. However, this goal is permanently misaligned with business.

Business goals cannot be totalized and absorb everything around them because corporate communication, which includes all strategies, is only effective when it can’t be misunderstood. Unless you plan to align your business with AI’s singular and unprecedented goal, you must be mindful when calling your goals AI since you cannot say “AI” nowadays if you ever want to be understood. As we call more and more things “AI,” the task of communicating purpose and direction becomes even more difficult. However, saying ANI, AGI, or ASI does not help matters. It hurts communication. The best advice for technical leaders is to avoid false continuums, false dichotomies, and false balance. As media critic Jay Rosen explains, borrowing a phrase from American philosopher Thomas Nagel, “false balance is a ‘view from nowhere.‘”

