Dark Reading | Products and Releases
8/29/2013 05:33 PM

Social Networks: Can Robots Violate User Privacy?

High-Tech Bridge conducted an experiment to verify how the 50 largest social networks, Web services, and free email systems respect, or abuse, the privacy of their users

Recent news in the international media has revealed numerous Internet privacy concerns that definitely deserve attention and further investigation. This is why we, at High-Tech Bridge, decided to conduct a simple technical experiment to verify how the 50 largest social networks, web services, and free email systems respect, or indeed abuse, the privacy of their users. The experiment and its results can be reproduced by anyone, as we tried to be as neutral and objective as possible.

The nature of the experiment was quite simple: we deployed a dedicated web server and created secret and totally unpredictable URLs on it for each tested service, something similar to:

http://www.our-domain-for-test.com/secret/18354832319/sgheAsZaLq/
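URLs of this kind can be generated with a cryptographically secure random source, so that they cannot be guessed or enumerated. A minimal sketch of how such a secret URL might be built (the function name and path layout are our own, modeled on the example above):

```python
import secrets

def make_secret_url(base="http://www.our-domain-for-test.com"):
    """Build a single-use, unguessable URL for one tested service.

    The path has two random segments, mirroring the example above:
    an 11-digit number and a URL-safe random slug.
    """
    numeric = str(secrets.randbelow(10**11)).zfill(11)  # e.g. 18354832319
    token = secrets.token_urlsafe(8)                    # e.g. sgheAsZaLq-like slug
    return f"{base}/secret/{numeric}/{token}/"

url = make_secret_url()
```

Because `secrets` draws from the operating system's CSPRNG, an outside party has no practical way to predict or brute-force the resulting path.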

Then we used various legitimate functionalities (detailed in the table below) of the tested services to transmit the secret URLs, carefully monitoring our web server logs for all incoming HTTP requests, to see which services followed a secret link that no one was supposed to know or access.
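Detecting a trapped service then comes down to scanning the access log for any request touching the secret path. A sketch of such a check against combined-format (Apache-style) log lines, assuming that log format (the helper name is our own):

```python
import re

# Apache/nginx "combined" log format: IP, identities, timestamp,
# request line, status, size, referer, user agent.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'\d+ \d+ "[^"]*" "(?P<agent>[^"]*)"'
)

def hits_for(secret_path, log_lines):
    """Return (ip, user_agent) for every request that touched the secret URL."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group("path").startswith(secret_path):
            hits.append((m.group("ip"), m.group("agent")))
    return hits
```

Since the URL was never published anywhere else, every hit returned by such a scan must originate from the service the URL was transmitted through.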

During the 10 days of our experiment, we trapped only six services out of the 50. However, among those six were four of the biggest and most used social networks: Facebook, Twitter, Google+ and Formspring. The remaining two were URL shortening services: bit.ly and goo.gl.

While such behavior may be a legitimate part of how URL shortening services operate, the same cannot be said of social networks such as Facebook and Twitter. Taking into consideration that some of the services may have legitimate robots (e.g. to verify and block spam links) crawling every user-transmitted link automatically, we also created a robots.txt file on our web server that restricted bots from accessing the server and its content. Only Twitter respected this restriction; all the other social networks simply ignored it and accessed the secret URL.
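A compliant crawler is expected to fetch robots.txt first and honor its rules before requesting any other path. A minimal sketch of what that check looks like, using Python's standard robots.txt parser (the exact contents of the experiment's robots.txt were not published, so a blanket disallow is assumed here):

```python
from urllib.robotparser import RobotFileParser

# Assumed equivalent of the experiment's robots.txt: forbid all
# robots from accessing anything on the server.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved bot (as Twitter's turned out to be) performs this
# check and, seeing False, never requests the secret URL.
allowed = rp.can_fetch(
    "Twitterbot/1.0",
    "http://www.our-domain-for-test.com/secret/18354832319/sgheAsZaLq/",
)
```

The networks that accessed the URL anyway either never fetched robots.txt or fetched it and ignored the disallow rule; robots.txt is a voluntary convention, not an enforcement mechanism.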

Below, you can find HTTP requests of trapped services that accessed the secret URLs:

Bit.ly: IP: 50.17.69.56 User-Agent: bitlybot

Facebook: IP: 173.252.112.114 User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)

Formspring: IP: 54.226.58.107 User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31

goo.gl: IP: 66.249.81.112 User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.4 (KHTML, like Gecko; Google Web Preview) Chrome/22.0.1229 Safari/537.4

Google+: IP: 66.249.81.112 User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20110814 Firefox/6.0 Google (+https://developers.google.com/+/web/snippet/)

Twitter: IP: 199.59.148.211 User-Agent: Twitterbot/1.0

Marsel Nizamutdinov, Chief Research Officer at High-Tech Bridge, comments: "The results of this experiment are quite interesting actually. The four trapped social networks justify their activities by 'automated verifications'. However, it is technically impossible to verify what is really going on and how the information obtained from the user-transmitted URLs is being used. Today, quite a lot of web applications omit authentication and rely on temporary or unpredictable URLs to hide some content and, when users transfer such URLs via social networks, they cannot be sure that their information will indeed remain confidential. Unfortunately there is no way to keep the URL and its content confidential [if there is no authentication of course] while transferring the URL via social networks."
