Pakistani Family From Karachi Wins AI Championship in Silicon Valley

The Mayet family from Karachi, Pakistan, has won the AI Family Challenge World Championship, held in Silicon Valley, California, on May 20, 2019. The family's entry, called "Cavity Crusher," uses an artificial intelligence algorithm to monitor a child's brushing time, assess their oral health habits, and notify parents accordingly. The championship was organized by Iridescent, a global technology education nonprofit that empowers underrepresented young people to become self-motivated learners, inventors, and leaders.

Winners Salman Mayet, Yasir Salman, and Fareeha Mapara. Photo courtesy of Intel Corp

The AI Family Challenge partners with lifelong learning advocates and leading experts in AI, including those from Google.org, NVIDIA, Intel, and the Patrick J. McGovern Foundation.

The event was hosted at Intel's Santa Clara campus. It was the culmination of Iridescent's AI Family Challenge in which 7,500 people from 13 countries participated in a 15-week program that brings together families, schools, communities and industry mentors to create AI projects that solve local problems.

The family's journey to the AI Championship began in Karachi, where Pakistan Science Club, in partnership with Iridescent, brought this learning opportunity to Pakistan at two different sites. More than 40 families from Karachi participated in an 18-week program. Through the AI Family Challenge, the Mayet family learned about AI, identified a problem in their community, and applied what they had learned to develop an AI-based solution to it.

Related Links:

Haq's Musings

South Asia Investor Review

Pakistani Students Win First Place in Stanford Design Contest

Pakistan's Research Output Growing Fastest in the World

AI Research at NED University Funded By Silicon Valley NEDians

Pakistan Hi-Tech Exports Exceed A Billion US Dollars in 2018 

Pakistan Becomes CERN Member

Pakistani Tech Unicorns

Rising College Enrollment in Pakistan

Pakistani Universities Listed Among Asia's Top 500 Jump From 16 to ...

Pakistani Students Win Genetic Engineering Competition

Human Capital Growth in Pakistan

Pakistan Joins 3D Print Revolution

Pakistan Human Development in Musharraf Years


Comment by Riaz Haq on December 18, 2022 at 8:05am

What is ChatGPT? The AI chatbot talked up as a potential Google killer
After all, the AI chatbot seems to be slaying a great deal of search engine responses.

https://interestingengineering.com/science/chatgpt-ai-chatbot-googl...

ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days hit a million users. It’s being used so much that its servers have reached capacity several times.

OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)

But what if we never know the secret sauce behind ChatGPT’s capabilities?

The chatbot takes advantage of a number of technical advances published in the open scientific literature in the past couple of decades. But any innovations unique to it are secret. OpenAI could well be trying to build a technical and business moat to keep others out.

What it can (and can’t do)
ChatGPT is very capable. Want a haiku on chatbots? Sure.

How about a joke about chatbots? No problem.

ChatGPT can do many other tricks. It can write computer code to a user’s specifications, draft business letters or rental contracts, compose homework essays and even pass university exams.

Just as important is what ChatGPT can’t do. For instance, it struggles to distinguish between truth and falsehood. It is also often a persuasive liar.

ChatGPT is a bit like autocomplete on your phone. Your phone is trained on a dictionary of words so it completes words. ChatGPT is trained on pretty much all of the web, and can therefore complete whole sentences – or even whole paragraphs.

However, it doesn’t understand what it’s saying, just what words are most likely to come next.
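The autocomplete analogy above can be made concrete with a toy sketch: a bigram model that, given a word, suggests the word that most often followed it in a tiny training text. This is a drastic simplification of what ChatGPT actually does (a large neural network predicting tokens, not a word-count table), offered only as an illustration of "predicting what comes next."

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then always pick the most frequent successor. Language models do the
# same thing in spirit, but with a neural network over a vast corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one most-likely word at a time.
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Note that the model has no notion of truth: it emits whatever sequence is statistically likely given its training data, which is exactly why a much larger version can produce fluent but false answers.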

Open only by name
In the past, advances in artificial intelligence (AI) have been accompanied by peer-reviewed literature.

In 2018, for example, when the Google Brain team developed the BERT neural network on which most natural language processing systems are now based (and we suspect ChatGPT is too), the methods were published in peer-reviewed scientific papers, and the code was open-sourced.

And in 2021, DeepMind’s AlphaFold 2, a protein-folding software, was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.

Following the release of ChatGPT, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced.

To understand why ChatGPT could be kept secret, you have to understand a little about the company behind it.

OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” AI in a way that “benefits humanity as a whole”. Elon Musk, Peter Thiel, and other leading tech figures pledged US$1 billion towards its goals.

Their thinking was we couldn’t trust for-profit companies to develop increasingly capable AI that aligned with humanity’s prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.

In 2019 OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.

It seems money got in the way of OpenAI’s initial plans for openness.

Profiting from users
On top of this, OpenAI appears to be using feedback from users to filter out the fake answers ChatGPT hallucinates.

According to its blog, OpenAI initially used reinforcement learning in ChatGPT to downrank fake and/or problematic answers using a costly hand-constructed training set.
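The reranking idea described above can be sketched in miniature. This is not OpenAI's actual method: in a real system the reward scores come from a neural reward model trained on costly hand-labelled human comparisons, whereas here they are a hard-coded lookup standing in for that model.

```python
# Toy illustration of preference-based downranking (a stand-in for
# RLHF-style training, not OpenAI's implementation): human feedback
# scores are used to rank candidate answers so that answers labelled
# as fabricated sink to the bottom.
candidates = [
    "The capital of France is Lyon.",
    "The capital of France is Paris.",
]

# Stand-in "reward model": in reality a network trained on human
# comparisons; here, hypothetical hand-assigned scores.
human_feedback = {
    "The capital of France is Paris.": 1.0,   # labelled correct
    "The capital of France is Lyon.": -1.0,   # labelled fabricated
}

def reward(answer):
    """Score an answer; unseen answers get a neutral 0.0."""
    return human_feedback.get(answer, 0.0)

# Downrank: sort candidates from highest to lowest reward.
ranked = sorted(candidates, key=reward, reverse=True)
best = ranked[0]
print(best)
```

The expense the blog post alludes to lies in building the labelled comparison data, not in the ranking step itself, which is trivial once a reward model exists.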

Comment by Riaz Haq on January 10, 2023 at 7:05pm

Why do your homework when a chatbot can do it for you? A new artificial intelligence tool called ChatGPT has thrilled the Internet with its superhuman abilities to solve math problems, churn out college essays and write research papers.

https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homewo...

After the developer OpenAI released the text-based system to the public last month, some educators have been sounding the alarm about the potential that such AI systems have to transform academia, for better and worse.

"AI has basically ruined homework," said Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, on Twitter.

The tool has been an instant hit among many of his students, he told NPR in an interview on Morning Edition, with its most immediately obvious use being a way to cheat by plagiarizing the AI-written work, he said.

Academic fraud aside, Mollick also sees its benefits as a learning companion.

He's used it as his own teacher's assistant, for help with crafting a syllabus, lecture, an assignment and a grading rubric for MBA students.

"You can paste in entire academic papers and ask it to summarize it. You can ask it to find an error in your code and correct it and tell you why you got it wrong," he said. "It's this multiplier of ability, that I think we are not quite getting our heads around, that is absolutely stunning," he said.

A convincing — yet untrustworthy — bot
But the superhuman virtual assistant — like any emerging AI tech — has its limitations. ChatGPT was created by humans, after all. OpenAI has trained the tool using a large dataset of real human conversations.

"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you," Mollick said.

It lies with confidence, too. Despite its authoritative tone, there have been instances in which ChatGPT won't tell you when it doesn't have the answer.

That's what Teresa Kubacka, a data scientist based in Zurich, Switzerland, found when she experimented with the language model. Kubacka, who studied physics for her Ph.D., tested the tool by asking it about a made-up physical phenomenon.

"I deliberately asked it about something that I thought that I know doesn't exist so that they can judge whether it actually also has the notion of what exists and what doesn't exist," she said.

ChatGPT produced an answer so specific and plausible sounding, backed with citations, she said, that she had to investigate whether the fake phenomenon, "a cycloidal inverted electromagnon," was actually real.

When she looked closer, the alleged source material was also bogus, she said. There were names of well-known physics experts listed – the titles of the publications they supposedly authored, however, were non-existent, she said.

"This is where it becomes kind of dangerous," Kubacka said. "The moment that you cannot trust the references, it also kind of erodes the trust in citing science whatsoever," she said.

Scientists call these fake generations "hallucinations."

"There are still many cases where you ask it a question and it'll give you a very impressive-sounding answer that's just dead wrong," said Oren Etzioni, the founding CEO of the Allen Institute for AI, who ran the research nonprofit until recently. "And, of course, that's a problem if you don't carefully verify or corroborate its facts."

Comment by Riaz Haq on April 1, 2023 at 5:18pm

The ChatGPT King Isn’t Worried, but He Knows You Might Be

https://www.opindia.com/2023/02/chahat-fateh-ali-khan-the-latest-vi...


By Cade Metz

Sam Altman sees the pros and cons of totally changing the world as we know it. And if he does make human intelligence useless, he has a plan to fix it.

I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.

Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.

Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”

He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.


---------------

Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.

He told me that it would be a “very slow takeoff.”

When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.

If he’s wrong, he thinks he can make it up to humanity.

He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.

But as he once told me: “I feel like the A.G.I. can help with that.”

© 2024   Created by Riaz Haq.