
Technology

Push For Self-Driving Car Rules Overlooks Lack Of Federal Expertise In AI Tech


A House subcommittee approved critical legislation for the fast-developing driverless car movement this week that sets specific rules for how many such vehicles can be on U.S. roads and the federal government’s role in regulating them. While it would give companies developing the technology the national framework they want, it also raises questions over who is best-suited to ensure this cutting-edge technology is safe.

Under proposed rules that will come before the full House for a vote in September, manufacturers could put up to 100,000 autonomous vehicles per company on U.S. roads every year, with a requirement that they certify the safety of those vehicles with the National Highway Traffic Safety Administration, or NHTSA. If approved, the proposal would preempt the current patchwork of state-level rules for operating self-driving cars and trucks. The rules build on a basic set of guidelines issued by President Barack Obama’s Transportation Department last September.

“The need for this legislation was laid out by the Obama Administration,” said Rep. Bob Latta, the Ohio Republican who chairs the House Subcommittee on Digital Commerce and Consumer Protection. “From the front bumper to the back bumper, whether it is a car, a pickup truck or a van, how the vehicle works and is designed should be the province of the federal government as has been the case for more than 50 years.”


California, which has issued permits to 36 companies testing about 200 autonomous vehicles in the state, would no longer be able to set its own guidelines if the federal proposal becomes law. But whether federal safety officials ultimately are better-suited to validate manufacturers’ safety claims for the artificial intelligence, algorithms and next-generation vision and sensing technologies that go into self-driving cars remains in question.

“There’s this whole gray ground about who is in the best position to regulate these new cars, which go beyond the traditional car into almost driving a computer and software,” said attorney Sharon Klein, a partner in the autonomous vehicle practice for Pepper Hamilton LLP. “There’s been this debate about whether NHTSA as an agency should continue to be involved in deployment as well.”

The new rules rely on companies to certify the safety of their systems. The concern is that NHTSA, the country’s primary auto safety regulator, lacks the software and computer science expertise needed to validate their technical claims, Klein said.

“It comes down to a question of who is going to be watching the manufacturers,” she told Forbes.

Along with the growing number of robotic vehicles in California, states including Michigan, Massachusetts and Nevada are home to ever larger test fleets. Waymo, the Alphabet Inc. unit created last year to commercialize Google’s self-driving car technology, has already begun a large-scale public test in Arizona that may eventually grow to hundreds of driverless vehicles ferrying people around Phoenix.

Companies including General Motors, Uber, Lyft and Ford are following suit, and Tesla intends to give its electric vehicles full autonomous capability within the next year or two via over-the-air updates.


In 2016, NHTSA and the Transportation Department defined 15 areas that manufacturers need to comply with when they put self-driving vehicles on the road. That basic framework will put the agency in an administrative role, ensuring certain benchmarks are met, despite lacking specific knowledge of the software and algorithms powering autonomous systems.

“There’s no way NHTSA has the technical capability to do this right now,” said Mike Ramsey, an automotive tech analyst with Gartner Research in Detroit.

“NHTSA is really the arbiter of ‘does it appear that you have done all the things you need to do?’ It’s counting on the fear of deaths and lawsuits that could arise to prevent carmakers from doing something egregious,” he said.

Insurance companies will also act as a “shadow regulator” to keep manufacturers honest in their technical claims, Ramsey said. “The fact is unsafe (autonomous) technology would quickly become uninsurable.”

Consumer Watchdog, a safety advocacy group, called on Congress to give NHTSA more resources to address this major technological shift, and it opposed curtailing individual states’ power to regulate robotic cars and trucks.

“Lost in the hyperbole over robot cars is a realistic assessment of the likely costs to both consumers and taxpayers, particularly over the coming decades, when robot cars and human drivers will share a ‘hybrid highway,’” John Simpson, director of the Santa Monica, California-based group’s privacy project, said in a statement.

“Pre-empting the states’ ability to fill the void left by federal inaction leaves us at the mercy of manufacturers as they use our public highways as their private laboratories however they wish with no safety protections at all,” said Simpson, who called on California House members to oppose the subcommittee proposal.

For its part, the California DMV said it doesn’t comment on pending legislation. The department has been tasked with setting regulations for the testing and deployment of autonomous vehicles on state roadways since 2014.


Technology

How A BlackBerry Wiretap Helped Crack A Multimillion-Dollar Cocaine Cartel


On August 18, 2017, four men travelling in a dual-engine speedboat carrying 1,590 pounds of cocaine were intercepted by the U.S. Coast Guard northwest of the Galapagos Islands.

The federal agents manning the channel launched a helicopter to hover over the boat. At this aggressive move, the men began jettisoning the bales of coke, each fitted with its own GPS tracker so it could be recovered later, according to the government’s narrative. They attempted to flee, and when they ignored warning shots from the helicopter, the chopper fired rounds directly at the boat, disabling it.

After the bales were collected, the government realized it had just stopped a huge amount of cocaine from entering the U.S.: in total, the haul carried a street value of $25 million. The four men, all Ecuadorians, were swiftly arrested and charged.

Though the cartel had set up a sophisticated, multilayered operation that sought to slip coke into the country and up to Ohio via land, air and sea, they had made a crucial error: They used BlackBerry phones. As the drug barons chatted about shifting cocaine and how to avoid the narcs over BlackBerry Messenger, a wiretap on a server in Texas was quietly collecting all their communications.

In a case that’s Narcos meets The Wire, federal agents have been listening in on that server since June 2017. And, Forbes can exclusively reveal, the interception is still yielding results. On Friday, an Ohio court is unsealing charges against one of the crew’s top brass: Francisco Golon-Valenzuela, 40.

Known as El Toro, Spanish for The Bull, the Guatemalan was extradited from Panama earlier this week and is appearing before a magistrate judge today. (Forbes hasn’t yet made contact with his counsel for a response but will update if comment is forthcoming.)

Described as one of various organizers and leaders of the unnamed cartel, El Toro is charged with conspiring to distribute 5 kilograms or more of cocaine on the high seas. As a result, he’s facing between 10 years and life in prison.

A key to BlackBerry 

For any organized crime operation, BlackBerry has always been a poor choice. BlackBerry Messenger, decommissioned in spring this year, did encrypt messages, but the Canadian manufacturer of the once-ubiquitous smartphone held the key, and all messages passed through a BlackBerry-owned server. If law enforcement could legally compel BlackBerry to hand over that key, they would get the plain text of every message it had garbled into gibberish.

Compare this to genuine, end-to-end encrypted messaging apps like WhatsApp or Signal: they create keys on the phone itself, and the device owner controls them. To spy on those messages, governments either have to hack a target device or have physical access to the phone. Both are tricky to do, especially in investigations of multinational criminal outfits.

Police can put a kind of tap on a WhatsApp server, known as a pen register. This will tell them what numbers have called or messaged one another, and at what date and time, but won’t provide any message content. That makes those apps considerably more attractive to privacy-conscious folk than those where the developer holds the keys, though sometimes to the chagrin of law enforcement.
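The difference between the two models can be sketched with a toy Python example. The cipher below is an illustrative XOR-keystream construction, not BBM’s or Signal’s actual cryptography, and all key names are hypothetical; the point is only who holds the key. With a provider-held key, whoever compels the provider can read every customer’s traffic; with keys generated on the devices, the server’s copy of the traffic is useless ciphertext.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash the key with a counter. Illustration only, not real crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream; the same operation decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

# Provider-held-key model (BlackBerry-style): one key, kept by the company,
# decrypts every customer's traffic passing through its server.
provider_key = secrets.token_bytes(32)
intercepted = encrypt(provider_key, b"move the shipment tonight")
assert decrypt(provider_key, intercepted) == b"move the shipment tonight"

# End-to-end model (WhatsApp/Signal-style): the key is generated on the
# devices, so the provider's server relays only ciphertext and metadata.
device_key = secrets.token_bytes(32)
relayed = encrypt(device_key, b"move the shipment tonight")
assert decrypt(provider_key, relayed) != b"move the shipment tonight"
```

A pen register corresponds to the metadata the relay still sees in the second case: which parties exchanged messages and when, but none of the content.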

It’s unclear how or when the DEA got access to the BlackBerry server. A so-called Title III order was issued, granting them court approval to carry out the wiretap, though that remains under seal.

It proved vital to the investigation. “There would be no case without the Title III on BlackBerry Messenger,” said Dave DeVillers, who was recently nominated as U.S. Attorney for the Southern District of Ohio. “The defendants, the seizures, the conspiracy were all identified with the Title III.”

A spokesperson for BlackBerry said: “We do not speculate or comment upon individual matters of lawful access.” The company has, however, previously made its stance on encryption public: Unlike other major tech providers like Apple or Google, BlackBerry will hand over the keys if it’s served with a legitimate law enforcement request.

If the police did receive a key from BlackBerry, it wouldn’t be the first time. Back in 2016, it emerged that the Royal Canadian Mounted Police (RCMP) had decrypted more than one million BlackBerry messages as part of a homicide investigation dating back to 2010.

As per reports from that time, it’s possible to use one of BlackBerry’s keys to unlock not just one device’s messages, but those on other phones too. Forbes asked the DOJ whether investigators would’ve been able to access other, innocent people’s BlackBerry messages as part of this wiretap, but hadn’t received a response at the time of publication.

Fishermen and spies

However those BlackBerry messages were intercepted, they helped illuminate a dark criminal conspiracy constructed of myriad parts. As revealed in today’s indictment, made known to Forbes ahead of publication, the gang employed “load coordinators.” Think of them as project managers, helping locate drivers for trucks and boats while finding people to invest in the cocaine.

Fishermen and other maritime workers were also allegedly recruited, both to refuel the cartel’s ships and to help transport the powder, prosecutors said.

Other individuals became ad hoc spies, sharing information on the activities and locations of police and military personnel trying to intercept shipments, according to the government’s allegations. Other coconspirators sheltered individuals who were at risk of extradition—not that it saved El Toro.

Forbes first became aware of the investigation in 2017, when a search warrant detailed various BlackBerry intercepts. In one, a pair of cartel employees discussed having to put some cocaine transports on hold because of a multinational maritime exercise—the Unitas Pacifico 2017—taking place in their shipment lanes, according to the warrant. BlackBerry wasn’t the only major tech provider to help on the case; that search warrant was for a Google account linked to one of the suspects, which investigators believe was used for further logistics.

The investigation has revealed that the 2017 seizure wasn’t the only time the cops had disrupted what was evidently a criminal enterprise worth hundreds of millions. In May 2016, long before the BlackBerry wiretap went up and the investigation into the cartel had begun in earnest, U.S. authorities intercepted 1,940 pounds of coke near the Guatemalan-Mexico border, worth another $30 million.

Despite such successes, DeVillers told Forbes the American government will never interdict its way to ending the drug trade. “We can only disrupt it,” he added. “And if we turn the tools used by the cartels to run their organization against them, we do just that.”

– Thomas Brewster, Forbes


Health

How Virtual Therapy Apps Are Trying To Disrupt The Mental Health Industry


Millions of Americans deal with mental illness each year, and more than half of them go untreated. As the mental health industry has grown in recent years, so has the number of tech startups offering virtual therapy, which range from online and app-based chatbots to video therapy sessions and messaging. 

Still a nascent industry, with most startups in the early seed-stage funding round, these companies say they aim to increase access to qualified mental health care providers and reduce the social stigma that comes with seeking help. 

While the efficacy of virtual therapy, compared with traditional in-person therapy, is still hotly debated, its popularity is undeniable. Its most recognizable pioneers, BetterHelp and Talkspace, have enrolled nearly 700,000 and more than 1 million users, respectively. And investors are taking notice.

Funding for mental health tech startups has boomed in the past few years, jumping from roughly $100 million in 2014 to more than $500 million in 2018, according to Pitchbook. In May of this year, the subscription-based online therapy platform Talkspace raised an additional $50 million, bringing its total funding to just under $110 million since its 2012 inception.

The ubiquity of smartphones, coupled with the lessening stigma around mental health treatment, has played a large role in the growing demand for virtual therapy. Of the various services offered on the Talkspace platform, “clients by far want asynchronous text messaging,” says Neil Leibowitz, the company’s chief medical officer.

Users seem to prefer back-and-forth messaging that isn’t restricted to a narrow window of time over face-to-face interactions. At BetterHelp, founder Alon Matas notes that older users are more likely to go for phone and video therapy sessions, whereas younger users favor text messaging.

“Each generation is getting progressively more mobile-native,” says John Prendergass, an associate director at Ben Franklin Technology Partners’ healthcare investment group, “so I think we’re going to see people become increasingly more accustomed, or predisposed, to a higher level of comfort in seeking care online.”

The ease and convenience of virtual therapy is another draw, particularly for busy people or those who live in rural areas with limited access to therapy and a range of care options.

Alison Darcy is founder and CEO of Woebot, a free automated chatbot that uses artificial intelligence to provide therapeutic services without the direct involvement of humans. Unlike traditional therapy, she says, Woebot and similar services require no appointments scheduled weeks in advance; users can receive real-time coaching the moment they need it. The sense of anonymity online can also lead to more openness and transparency, and it attracts people who normally wouldn’t seek therapy.

Along with stigma, the cost of therapy has historically been a barrier to quality mental-health care. Health insurance often doesn’t cover therapy sessions. In most cities, sessions run about $75 to $150 each, and can go as high as $200 or more in places like New York City. Web therapists don’t have to bear the expense of brick-and-mortar offices, filing paperwork or marketing their services, and these savings can be passed on to clients.

BetterHelp offers a $200-a-month membership that includes weekly live sessions with a therapist and unlimited messaging in between, while Talkspace’s cheapest subscription, at $260 a month, offers unlimited text, video and audio messaging.

But virtual therapy, particularly text-based therapy, is not suitable for everyone. Nor is it likely to make traditional therapy obsolete. “Online therapy isn’t good for people who have severe mental and relational health issues, or any kind of psychosis, deep depression or violence,” says Christiana Awosan, a licensed marriage and family therapist. 

At her New York and New Jersey offices, she works predominantly with black clients, a population that she says prefers face-to-face meetings. “This community is wary of mental health in general because of structural discrimination,” Awosan says. “They pay attention to nonverbal cues and so they need to first build trust in-person.”  

Virtual therapy apps can still be beneficial for people with low-level anxiety, stress or insomnia, and they can also help users become aware of harmful behaviors and obtain a higher sense of well-being. 

Sean Luo, a psychiatrist whose consultancy work focuses on machine learning techniques in mental health technology, says: “This is why some of these companies are getting very high valuations. There are a lot of commercialization possibilities.” He adds that from a mental health treatment perspective, a virtual therapy app “isn’t going to solve your problems, because people who are truly ill will by definition require a lot more.”

Relying on digital therapy platforms might also provide a false sense of security for users who actually need more serious mental-health care, and many of these apps are ill-equipped to deal with emergencies like suicide, drug overdoses or the medical consequences of psychiatric illness. “The level of intervention simply isn’t strong enough,” says Luo, “and so these aspects still need to be evaluated by a trained professional.”

Ruth Umoh, Diversity and Inclusion Writer, Forbes Staff.


Technology

AI 50 Founders Say This Is What People Get Wrong About Artificial Intelligence


Forbes’ new list of promising artificial intelligence companies highlights how the technology is creating real value across industries like transportation, healthcare, HR, insurance and finance.

Naturally, the founders of the honoree companies are excited about the technology’s benefits and, in their roles, spend a lot of time thinking and talking about its strengths and limitations. Here’s what they think people get wrong about artificial intelligence.

Affectiva CEO Rana el Kaliouby says she’s too often encountered the idea that AI is “evil.”

“AI—like any technology in history—is neutral,” she says. “It’s what we do with it that counts, so it’s our responsibility, as an AI ecosystem, to drive it in the right direction.” 

Companies need to be aware of how AI could widen bounds of inequality, she adds: “Any AI that is designed to interact with humans—Affectiva’s included—must be evaluated with regards to the ethical and privacy implications of these technologies.”

Sarjoun Skaff, CTO and cofounder of Bossa Nova Robotics, says that the biggest misconception he encounters is that artificial intelligence is actually, well, intelligent. 

“The truth is much more mundane,” he says. “AI is a very good pattern-matching tool. To make it work well, though, scientists need to understand the details of how it internally works and not treat it as an ‘intelligent’ black box. At the end of the day, making good use of great pattern matching still belongs to humans.”

Similarly, Aira cofounder Suman Kanuganti says that the public has “over-inflated expectations” for artificial intelligence.

“Garry Kasparov sums it up nicely: ‘We are in the beginning of MS-DOS and people think we are Windows 10,’” Kanuganti says. “AI realistically is still like a 3-year-old child at this stage. When it works, it feels magical. It does some things well, but there’s still a long way to go.”

So, no, we are nowhere close to “artificial general intelligence,” or AGI, where machines are actually as smart as humans.

“We’re still a long way from AI having the general intelligence of even a flea,” says David Gausebeck.

Despite the tendency to overestimate what artificial intelligence can do, the difficulty of building an effective system is often underestimated, some founders say.

“The systems you need to implement and manage machine learning in production are often much more complex than the algorithms themselves,” says Algorithmia CEO Diego Oppenheimer. “You can’t throw models at a complex business problem and expect returned value. You need to build an ecosystem to manage those models and connect their intelligence to your applications.” 

Put another way, you can’t just “sprinkle on some artificial intelligence like a magic sauce,” says Feedzai CEO Nuno Sebastiao.

One of the most common tropes that a handful of founders brought up was the idea that artificial intelligence is primarily a job killer.

People.ai founder Oleg Rogynskyy says that AI should be seen as a creator of new opportunities instead of a destroyer of jobs.

“In a nutshell, AI does two things: It automates repetitive low-value-add work for humans (which will indeed take low-complexity jobs away), which we think of as ‘Autopilot,’  and it guides people on how to do their work or other activities better (which makes humans more effective at what they do), which we call ‘Copilot,’” he says. “While Autopilot can take simple, repetitive and boring jobs away, Copilot is absolutely the best way to guide, train and educate humans on how to do new things.”

– By Jillian D’Onfro, Forbes
