Demonstrators protesting Ukrainian President Viktor Yanukovych have suspected their cellphone location data was being tracked since at least last week, when people near a clash between riot police and protesters received a chilling text message. It read: “Dear subscriber, you are registered as a participant in a mass disturbance.”
The country’s three cellphone companies denied they had turned over subscribers’ location data or sent the message. Instead, some suggested the “Dear subscriber” text, and others like it sent to other protesters, were the work of hackers using rogue base stations that mimicked those belonging to the carriers. Now protesters and civil liberties advocates around the world have official confirmation of the cellphone monitoring: a ruling made public on Wednesday formally ordering a telephone company to hand over such data.
From The New York Times:
Protesters for weeks had suspected that the government was using location data from cellphones near the demonstration to pinpoint people for political profiling, and they received alarming confirmation when a court formally ordered a telephone company to hand over such data.
Earlier this month, protesters at a clash with riot police officers received text messages on their phones saying they had been “registered as a participant in a mass disturbance.”
Then, three cellphone companies — Kyivstar, MTS and Life — denied that they had provided the location data to the government or had sent the text messages. Kyivstar suggested that it was instead the work of a “pirate” cellphone tower set up in the area.
In a ruling made public on Wednesday, a city court ordered Kyivstar to disclose to the police which cellphones were turned on during an antigovernment protest outside the courthouse on Jan. 10.
The order applied only to this one site on one day, and did not cover the area of the main protest, Independence Square, where sometimes more than 100,000 people have shown up, most presumably carrying cellphones whose location there could identify them as political opponents of the government.
No doubt smartphones and other types of mobile devices have made life easier for hundreds of millions of people who no longer must be tethered to landlines and computers to communicate. But handsets have also made life easier for the people who carry out government surveillance. It’s trivial for someone with access to carrier servers to sift through data associated with one or more base stations and collect the unique hardware identifiers of each phone that connected to it at a given time. It’s also worth remembering that the practice of collecting “tower dumps” isn’t limited to eastern European countries. According to The Washington Post, a recent congressional inquiry showed that US law enforcement made more than 9,000 requests for tower dumps in 2012.
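To make concrete how little work a tower dump involves, here is a minimal sketch of the kind of query it amounts to, assuming a simple carrier log of connection records. All identifiers, field names, and values are hypothetical; real carrier systems obviously differ.

```python
from datetime import datetime

# Hypothetical connection records a carrier might retain:
# (hardware ID, tower ID, connection timestamp)
connection_log = [
    ("356938035643809", "tower_17", datetime(2014, 1, 21, 20, 15)),
    ("359881030314356", "tower_17", datetime(2014, 1, 21, 20, 40)),
    ("356938035643809", "tower_03", datetime(2014, 1, 21, 22, 5)),
]

def tower_dump(log, tower_id, start, end):
    """Return the unique hardware IDs seen on one tower in a time window."""
    return {imei for imei, tower, ts in log
            if tower == tower_id and start <= ts <= end}

# Every phone near "tower_17" during the hour of a protest:
ids = tower_dump(connection_log, "tower_17",
                 datetime(2014, 1, 21, 20, 0),
                 datetime(2014, 1, 21, 21, 0))
print(sorted(ids))
```

A single filter over routine billing-grade records is enough to produce a list of everyone present, which is precisely why the practice raises civil liberties concerns.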
Early on the last morning of CES, I found myself in a Las Vegas parking lot signing a liability waiver. I was there for a ride in a modified Ford Taurus carrying what could be the future of driving in America: a system that alerts drivers of potential accidents by talking to other cars.
After going through a number of scenarios, the driver of the Taurus pulled up to a simulated intersection. As the light changed and the driver went to pull through, an alarm sounded on the dash and he braked—just as another car in the demo, previously blocked from view by a parked container, shot through its red light.
In the real world, I would have been praying that the side-impact airbags would protect me. But in this version of a future US roadway, I was saved by radio signals sent by the car running the light, alerting the Taurus that a collision was imminent. “When you look at what causes accidents, about 90 percent are due to driver error,” said Michael Schulman, the technical leader for vehicle communications in Ford’s Active Safety Research and Advanced Engineering group. “Mostly drivers are distracted, or they just have bad judgment, or they’re impaired. So this is meant to be a first step to see how we can warn them. The car is always exchanging messages with other cars, and just in that rare case when I need it, I get a warning.”
For the past decade, engineers from a host of auto companies—including Ford, GM, Honda, Toyota, Nissan, Daimler, Volkswagen Group, and Hyundai Kia—have been collaborating on the next step in vehicle safety. The system, called vehicle-to-vehicle (V2V) communications, or Dedicated Short-Range Communications (DSRC), will allow cars to share data that can alert drivers to prevent the most common—and most fatal—multi-vehicle accidents on American roads.
The idea behind V2V is fairly simple, and it’s based on technology that is already part of many new cars—it’s just put together in a different way. Tested in a 3,000-car trial in Ann Arbor, Michigan, over the past three years, the system uses a variant of Wi-Fi technology, GPS data, and vehicle data already collected by sensors in many vehicles to broadcast information that can warn other vehicles of a potential crash.
Just how soon this technology will hit the streets is still an open question, however. V2V is largely ready to go. And last year, the National Highway Traffic Safety Administration (NHTSA) seemed poised to mandate the technology in every vehicle. “NHTSA has to make a decision about [if it’s] going to proceed toward a regulation that would require this on new cars,” said Schulman.
But that announcement has been held up. And part of the reason may be the leaks from former National Security Agency contractor Edward Snowden and the heightened awareness among both citizens and legislators of government surveillance. “Given what’s happened with the NSA,” Schulman said, “I think they felt, ‘If an announcement came out tomorrow, people are going to freak out.’”
The unfinished roadmap
Officially, the NHTSA would only issue the following statement on V2V: “The Department of Transportation and NHTSA have made significant progress in determining the best course of action for proceeding with additional vehicle-to-vehicle communication activities and expect to announce a decision in the coming weeks.” Sources at the NHTSA say that the agency simply hasn’t finished all of the analysis needed to support a decision on V2V yet.
When that decision does come, many in the industry believe it will be the first step in a much broader transformation of the nation’s transportation system. It will seed the creation of a network that not only prevents automobile accidents but also turns vehicles into data collectors. It would make it possible for traffic management systems to use vehicle-to-infrastructure (V2I) communications to monitor congestion and deliver information to vehicles that improves traffic flow.
V2V and V2I are also seen as a necessity in the development of truly autonomous cars, allowing robotic vehicles to negotiate with each other to handle traffic flow. They could improve public transportation and provide countless other benefits to both drivers and governments.
But what worries some is that the system could be used to target “bad actors” on the highways, as they were referred to in a recent Government Accountability Office (GAO) report. Cars may become informants on drivers who speed or drive erratically, though the Department of Transportation says that the data will be purely anonymous and not used for those purposes.
There are already privacy concerns about location and other data being collected by automakers. Senator Al Franken of Minnesota recently wrote a letter to Ford executives requesting clarification on the company’s practice of collecting GPS data from vehicles. That request came after Ford Global Vice President of Marketing Jim Farley said that for drivers in Ford vehicles equipped with GPS and Sync, “We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” Farley made that comment at CES, and he has since said that Ford does not track customers in their cars without their approval or consent. But the capability still exists, as systems like Sync and GM’s OnStar collect more data behind the scenes as part of navigation and safety features.
V2V ups privacy concerns because it essentially broadcasts a vehicle’s location and speed, as well as some information about where a vehicle has been previously, to anyone within range. And while Department of Transportation officials told the GAO that “V2V communication security system would contain multiple technical, physical, and organizational controls to minimize privacy risks—including the risk of vehicle tracking by individuals and government or commercial entities,” regulating who can use V2V data and for what would fall outside the Department of Transportation’s span of control. It would essentially require legislation by Congress.
And even after the NHTSA makes a decision on a way forward with V2V—if it decides there is a way forward—privacy concerns aren’t the only problems left to overcome.
Integrating V2V systems with new car designs may be relatively simple. The current platform for V2V is based on well-understood technology, such as automotive-grade GPS, radio technology based on the underlying standards of Wi-Fi, and event sensors that already report data such as a car’s pitch and yaw. A vehicle’s current warning systems—lights, alarms, and in some cases “haptic feedback” systems such as vibrating seats—are already in production in vehicles equipped with anti-collision radar and other sensors. But integrating V2V into existing vehicles could be complex and expensive.
Getting in gear
In 1999, the Federal Communications Commission designated a band of radio spectrum around 5.9GHz for use in communication between cars and infrastructure. It’s also a range used by satellite and military communications systems, so it’s not exclusively the domain of V2V, but the limited range of the communications systems reduces concerns about interference.
While the spectrum was there, development of standards for V2V didn’t begin in earnest until 2002. That work has focused on the 802.11p wireless protocol—a modification of the 802.11 standard used by Wi-Fi wireless networks.
Using 802.11p broadcasts from omnidirectional antennas, V2V systems have a range of about 250 meters (a little more than 820 feet). “We send out a short message on one channel of that band that says ‘here’s my position, here’s my speed, here’s my yaw rate, and here’s my acceleration—the vehicle’s state,’” said Schulman. “You actually keep sending it out even if there’s no one around you.”
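The message Schulman describes is a small, repeating state snapshot. The sketch below builds one such message in Python; the field names and JSON encoding are illustrative only, not the actual DSRC wire format.

```python
import json
import time

def vehicle_state_message(lat, lon, speed_mps, yaw_rate_deg_s, accel_mps2):
    """Build the kind of state snapshot Schulman describes: position,
    speed, yaw rate, and acceleration, stamped with the send time.
    (Hypothetical field names; DSRC uses a compact binary format.)"""
    return json.dumps({
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "yaw_rate_deg_s": yaw_rate_deg_s,
        "accel_mps2": accel_mps2,
        "sent_at": time.time(),
    })

# A car in Ann Arbor doing ~30 mph, gently braking:
msg = vehicle_state_message(42.2808, -83.7430, 13.4, 0.2, -1.1)
print(msg)
```

As the quote notes, this goes out continuously whether or not anyone is listening; receiving cars simply decode each neighbor’s snapshot and decide whether a warning is warranted.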
That constant broadcast could be useful for a number of applications with goals beyond just preventing collisions. “Your car could act like a traffic probe and report back to a central traffic management center—some anonymized data that says ‘here’s the path I took and how long it took me to get there,’” said Schulman. “That could be aggregated, and traffic centers would have a lot of real-time information about where congestion is. They could use that to time the lights, to do ramp metering, to give people better routes. We could connect up public transportation so people could look at where they want to go, what their choices are, how long each would take, and how much it would cost—all based on real information.”
In 2012, the NHTSA sponsored a large-scale test of V2V and some V2I technology in Ann Arbor, Michigan. A total of 3,000 cars, some with factory-installed V2V systems and some with systems integrated “after market,” were let loose on the streets of the city for normal use. “It worked pretty well,” said Schulman. “There were some issues, but it was very early technology.”
If the NHTSA goes forward with developing a regulation on V2V—which could be defined by 2016—systems could be a requirement in most cars by 2018. And that would be the impetus for a revolution in infrastructure systems, Schulman said. “My sense is that it will grow like hotspots grew when laptops started having Wi-Fi built in—cities and countries will start to invest in it. Some places will be more progressive than others. It may not be in Montana as quickly as Boston, but eventually we’ll see it around the country.”
The Federal Highway Administration is on board with that thinking to the tune of $45 million, funding a joint auto industry program to start developing V2I applications. Some proposed uses include a system that communicates with city parking systems, using travel data and destination information to determine when a car will arrive and reserving a parking spot for it.
The biggest hurdle most cities and counties will face in installing these kinds of systems is likely the financial hit. The GAO study noted that the cost of roadside V2I equipment “could range between $25,000 and $30,000 per installation.” And that figure doesn’t include the backend costs of operating and maintaining them.
But there are a number of limitations to V2V technology that could create challenges, both from a safety and policy standpoint. One of them is simply preventing the system from being hijacked by bad data.
Certified to drive
Creating a national requirement for even just some classes of vehicles to carry V2V systems will create the need for a whole new layer of national information infrastructure, one that ensures the system can’t be used maliciously to spoof traffic data, creating false alerts or potentially even accidents.
The security design of the system as implemented in tests so far will require a national certificate infrastructure much like the one used to prevent domain spoofing and secure the Web. It will require a database of certificates—like the X.509 certificates used in public key infrastructure (PKI)—to verify that devices are legitimate, and a way to rescind permissions so that no one can send out spoofed messages. If a certificate were compromised, or a manufacturer misconfigured a batch of V2V systems, the certificate authority would be able to revoke the associated certificates. This prevents spoofing much the way DNSSEC prevents the “poisoning” of Internet domain address tables by a rogue Domain Name Service server.
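The check each receiving car would perform is conceptually simple. Here is a toy version of certificate validation against a revocation list, analogous to how X.509 CRLs work; the identifiers and the flat-set data structures are invented for illustration and are not the actual V2V security design.

```python
# Hypothetical revocation list and trust store, distributed to vehicles.
revoked = {"cert-0042", "cert-0913"}      # revoked certificate IDs
trusted_issuers = {"VehicleCA-1"}         # authorities we accept

def accept_message(cert_id, issuer):
    """Accept a V2V message only if its certificate comes from a
    trusted authority and has not been revoked."""
    return issuer in trusted_issuers and cert_id not in revoked

print(accept_message("cert-1001", "VehicleCA-1"))  # valid cert -> True
print(accept_message("cert-0042", "VehicleCA-1"))  # revoked -> False
print(accept_message("cert-1001", "RogueCA"))      # untrusted CA -> False
```

The hard part isn’t this lookup; it’s keeping `revoked` current in tens of millions of moving vehicles, which is exactly the distribution problem discussed below.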
The problem is that no one has ever developed a PKI system large enough to handle every vehicle in the United States—every car, truck, bus, and motorcycle. The revocation table for expired or compromised certificates would have to be distributed to cars constantly to make sure they weren’t victimized by recorded-data (replay) attacks or by systems using hacked hardware to spoof traffic.
So far, there has been no agreement on how this PKI would distribute its certificates. Proposals have included having roadside systems issue certificates as vehicles drive by, and having certificates sent to vehicles out-of-band over cellular connections. The latter would mean that every car in the country would need its own integrated cellular connection, or that drivers would have to connect their phones to the system regularly to avoid being shut out of the network.
The certificates in the system would theoretically be anonymous, not tied to specific identifying information about drivers or their vehicles. Even so, certificates could potentially be used to create “fingerprints” for vehicles passing through a V2I network’s mesh of antennas, even without an explicit connection to the driver. Certificate information could be combined with traffic cameras and other sensors, connecting the certificate to a vehicle’s speed, lane-changing behavior, or even travel routes.
Other questions include who is put in charge of the certificate system and who pays for it. Putting it under federal control would allow for more direct regulation of privacy, but it would be hugely expensive. And because there’s no real idea of what the capacity requirements will be for communication between cars and the certificate system, and because it’s not known who will operate it, the GAO found, “it is currently not only difficult to estimate the potential costs, but unclear who or what entity—consumers, automobile manufacturers, the Department of Transportation, state and local governments, or others—would pay the costs. Determining who or what entity will fund the system will likely prove challenging.”
Another, potentially larger problem facing V2V and V2I is the threat to the spectrum that the services would be tied to. The President’s Council of Advisors on Science and Technology has pushed to take the spectrum away from the Department of Transportation and free it up for use in broadband wireless Internet service. As part of the Middle Class Tax Relief and Job Creation Act of 2012, Congress required the National Telecommunications and Information Administration to look at ways that the spectrum could be shared with unlicensed users in the 5.9GHz band.
There’s also the problem with GPS technology itself. In urban environments, GPS fixes can be difficult at best, with buildings blocking parts of the constellation of satellites overhead at any time. That may not be a problem for some collision avoidance portions of the system—the vehicles in range of each other will have the same GPS data, and as a result, the same distortion of location to deal with. But “shadows” in GPS data could cause frequent errors in other systems dependent on the data, creating havoc in traffic flow systems as well as holes in safety coverage at intersections where the data might be different around the corner.
Still, the biggest blind spot remains the privacy of individual drivers. And even before the era of super-connected cars begins, we’ve seen hacks that impact braking and speed and others that enter through things as simple as tire pressure monitors. So while the Department of Transportation has sought to address user security and privacy as much as it can, this effort will require more. Vehicle owners need to know that companies or local governments won’t be able to use the data from V2V to track cars like website users, and that is going to be a hard sell in this post-Snowden world.
Researchers working for BT and Alcatel-Lucent announced that they have created a broadband technology able to hit 1.4 terabits per second, a speed fast enough to download 44 HD films in one second. Most important, it can all be done on the existing fiber network in London.
BT researchers explained that they used a so-called “flexigrid” infrastructure, which created a “super channel” of seven 200Gbps channels combined for a total capacity of 1.4Tbps. The BT and Alcatel-Lucent researchers also narrowed the gaps between the transmission channels, packing them more densely and improving the efficiency of data transmission by 42.5% compared with today’s standard networks.
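The headline numbers are easy to sanity-check. Seven 200Gbps channels do add up to 1.4Tbps, and working backward from the “44 HD films per second” claim gives a plausible film size (the per-film figure is our inference from the article’s own numbers, not something the researchers stated):

```python
# Aggregate capacity of the "super channel":
channels = 7
per_channel_gbps = 200
total_gbps = channels * per_channel_gbps
print(total_gbps)  # 1400 Gbps, i.e. 1.4 Tbps

# If 44 HD films fit in one second, each film is roughly:
per_film_gigabytes = total_gbps / 44 / 8  # divide by 8: bits -> bytes
print(round(per_film_gigabytes, 1))       # ~4.0 GB per film
```

About 4GB per film is in line with a typical compressed HD movie, so the claim is internally consistent.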
According to an Alcatel-Lucent optical marketing lead, the technique is comparable to decreasing the space between lanes on a busy freeway, allowing more lanes of traffic on the same road. The researchers conducted the test last fall on a 410-kilometer fiber link between central London and Ipswich, and the companies believe the technology could help them meet consumer and business demand for increased bandwidth.
Unfortunately, this is all still backbone and core network stuff, so it won’t change the speeds you can receive at home – traffic of individual users still has to go through the last mile bottleneck.
Former Norwegian Socialist Left Party minister Baard Vegar Solhjell and his party colleague Snorre Valen have written to the Norwegian Nobel Committee to nominate Edward Snowden for the Nobel Peace Prize.
“He has contributed to revealing the extreme level of surveillance by nations against other nations and of citizens,” Solhjell said on Wednesday, explaining his move.
“Snowden contributed to people knowing about what has happened and spurring public debate” on trust in government, which he said was “a fundamental requirement for peace”.
In a letter to the Norwegian Nobel Committee obtained by Agence France-Presse (AFP), Solhjell and Valen said they do not necessarily condone or support all of Snowden’s disclosures, but praised him for revealing the “nature and technological prowess of modern surveillance”.
“The level of sophistication and depth of surveillance that citizens all over the world are subject to have stunned us, and stirred debate,” they wrote in the nomination letter.
They added that Snowden’s actions have “led to the reintroduction of trust and transparency as a leading principle in global security policies”.
US National Security Agency documents leaked by Snowden in 2013 revealed widespread surveillance of individuals and institutions in the US and around the world.
According to the whistleblowing website WikiLeaks, Snowden, now living in Russia, has applied for asylum in several countries, including Norway.
Solhjell, who was environment minister until Norway’s left-wing government lost power last year, told AFP he was aware of Snowden’s reported request for asylum and that it should be handled according to normal procedures.
“This matter has not affected our decision to nominate Snowden for the peace prize,” Solhjell said.
The deadline for submitting nominations for the 2014 peace prize is February 1.
Among those eligible to forward nominations are politicians and barristers around the world, as well as university professors from certain disciplines.
According to researchers at Princeton University, Facebook, the most popular social network in the world, has spread like an infectious disease, but people are now slowly becoming immune to its attractions. The researchers predict that the platform will be largely abandoned by 2017.
The expectation of Facebook’s impending doom is built on a comparison of the growth curves of epidemics with those of online social networks. The researchers believe that, like the bubonic plague, Facebook may also eventually die out.
Facebook celebrates its 10th birthday this week and has outlived many rivals, such as Myspace and Bebo. However, the Princeton forecast states that the network will lose 80% of its user base within the next several years.
The researchers based their prediction on the number of times the word “Facebook” was typed into Google search. According to Google Trends charts, Facebook searches peaked a year ago and have been declining since. The researchers explained that ideas, like diseases, spread infectiously between people before eventually dying out, a dynamic that epidemiological models describe well and that can be applied to online social networks. Ideas spread between people who share them with each other, but once adopters lose interest and stop passing the idea along, they become “immune.”
Four months ago, Facebook reported almost 1.2 billion monthly active users, and the company is due to update investors on its traffic numbers soon. Although desktop traffic to the service is reported to be falling, it can be explained by the fact that people now mostly access the network via their mobile phones.
The researchers used a “SIR” (susceptible, infected, recovered) model of disease for their study, which produces equations mapping the spread of an epidemic and the recovery from it. The equations were first tested against the lifespan of Myspace before being applied to Facebook. Myspace was created in 2003 and peaked in 2007 with 300 million registered users, then fell out of use by 2011. Acquired by News Corp for $580 million, Myspace soon signed a $900 million deal with Google and was once valued at $12 billion. In the end, however, News Corp sold it for as little as $35 million.
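For readers unfamiliar with SIR models, the mechanics are compact enough to show directly. The sketch below integrates the standard SIR equations with a simple Euler step; the parameter values are illustrative, not the values the Princeton paper fitted to Myspace or Facebook data.

```python
def sir(beta, gamma, s, i, r, steps, dt=0.1):
    """Euler integration of the SIR equations:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    beta is the infection rate, gamma the recovery rate."""
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Start with nearly everyone susceptible and a tiny infected seed.
# The "infection" (adoption) surges, then dies out as users recover
# (lose interest and become immune).
s, i, r = sir(beta=0.5, gamma=0.1, s=0.99, i=0.01, r=0.0, steps=2000)
print(round(i, 3))  # active fraction after the epidemic runs its course
```

Run long enough, the infected fraction collapses toward zero while the recovered fraction dominates, which is the shape of the abandonment curve the researchers project onto Facebook.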
Well, the 870 million users who access Facebook via their mobile phones can easily explain the drop in Google searches: they no longer have to type the word Facebook into Google to log on, because they have mobile apps. Still, Facebook has officially admitted that over the previous three months it did see a decrease in daily users, especially among younger teens. However, the company’s investors are quite happy with Facebook’s share price, which reached record highs this month, valuing the social network at $142 billion.
The world’s largest book publishers have recently sued Hotfile, demanding up to $7.5 million from the now defunct file-hosting portal. In the meantime, it is unknown whether Hotfile has any money left in the bank.
Back in December 2013, Hotfile and the Motion Picture Association of America ended their legal dispute with an $80 million settlement. Although the agreement would have allowed Hotfile to continue operating after deploying a filtering mechanism, the service chose to shut down instead. But that move doesn’t mean the trouble is over for Hotfile: now, inspired by Hollywood’s multi-million dollar victory, a number of book publishers have launched a lawsuit against the site. They claim that Hotfile built its business on copyright infringement and that the service and its operators massively violated their rights.
Fifty books have been submitted as evidence, and the publishers are demanding compensation for them; overall, Hotfile faces up to $7.5 million in damages. The complaint itself is nothing new: it largely repeats the arguments previously made by the MPAA, for example that Hotfile was aware its service was being used to infringe copyright. The publishers pointed out that the company received millions of DMCA takedown notices and knew that users were migrating to Hotfile for copyrighted content after RapidShare was sued.
The publishers also accused the file-hosting service of failing to delete infringing files from its servers and claimed that Hotfile lacked a repeat-infringer policy, never banning repeat infringers who accounted for a large percentage of the infringing files. Although Hotfile received plenty of DMCA notices, it didn’t bother to track whether any of the files came from the same user.
As a result, a small group of persistent infringers managed to upload millions of infringing files. The plaintiffs claimed that by early 2011, almost 25,000 users had received at least three DMCA notices, and many had received more than 100. Those users accounted for 50 million uploaded files, some 44% of all files hosted by Hotfile.
If you remember Hotfile’s legal history, this case is quite strong and may in part explain why the book publishers chose Hotfile as a target. The only question is whether the defendant still has money left to compensate for any damages.
To those who say that self-driving cars have nothing to do with Google’s core business of selling ads, listen up: Google was just awarded a patent for an ad-powered taxi service.
The patent, which was first spotted by TechCrunch, would allow advertisers to offer potential customers a free ride to their place of business. This would solve one of the biggest problems for brick-and-mortar retailers: getting customers to their location. The system would offer free or discounted transportation based on an algorithmic decision-making process involving the user’s current location, the cost of transportation, and the potential profit from a completed sale. The concept is basically a “free ride coupon,” and the patent mentions transportation modes like taxis, trains, buses, and even autonomous vehicles.
The ads would be displayed either via cell phone (which could detect the user’s current location) or from a stationary kiosk in a public area. The ads would, of course, be highly targeted. In the smartphone example, Google would identify the user by their phone, and for the public kiosk example, a user would be asked to identify themselves (“Sign in to your Google+ account!”). The system would track how often you use the discounted transportation to make a purchase, and if you bum too many free rides without buying something, advertisers may not offer you a ride next time. The patent also mentions that users could offer up information about themselves, like who else is with them, to get recommendations from advertisers and browse the discounted transportation catalog. For instance, two adults may want to go to dinner, while an adult and child might want to go to a family friendly establishment. Advertisers would bid against each other, just like they do for Google ads, based on your purchase history and other factors in your profile.
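The economic core of the patent, as described above, boils down to a profit-versus-fare comparison. Here is a toy version of that decision rule; the function, its parameters (including the conversion rate), and all the numbers are our own illustration, not anything specified in the patent.

```python
def should_offer_ride(fare, expected_purchase, margin, conversion_rate):
    """Offer a free ride only when the expected profit from the visit
    beats the cost of the trip. All inputs are hypothetical:
    fare              -- cost of transporting the customer
    expected_purchase -- predicted spend if they buy
    margin            -- retailer's profit margin on that spend
    conversion_rate   -- chance the ride actually ends in a purchase
    """
    expected_profit = expected_purchase * margin * conversion_rate
    return expected_profit > fare

# A likely big spender is worth a $12 taxi ride (16 > 12)...
print(should_offer_ride(fare=12.0, expected_purchase=80.0,
                        margin=0.4, conversion_rate=0.5))
# ...a habitual free-rider with a low expected spend is not (6 < 12).
print(should_offer_ride(fare=12.0, expected_purchase=30.0,
                        margin=0.4, conversion_rate=0.5))
```

This also makes sense of the patent’s tracking of how often a user actually buys: that history is effectively the conversion-rate input, and bumming too many rides drives it down until the offer disappears.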
It’s unclear if Google plans to implement something like this soon or if the idea is part of a post-driverless-car utopia. For what it’s worth, the patent was cooked up by members of Google’s driverless car division. An advertiser paying for a bus, train, or taxi might be a little too expensive today, but imagine a self-driving electric vehicle, where driving around doesn’t burn gas or involve paying a taxi driver, and suddenly transportation becomes a lot cheaper. Perhaps it would eventually be cheap enough that your local business would pay for your taxi fare, as long as you promise to buy something.