Dissecting a MySpace cookie

May 18th, 2011

I previously looked at the MySpace source code and as an aside, I decided to look at the MySpace cookie placed on my computer through Internet Explorer. I need to spend some more time with it, but I found one tidbit of interest. Here are the contents of that cookie:



Here is an interesting part in Base64:


Here is the Base64 translation:

USRLOC=AreaCode=775&City=Reno&CountryCode=US&CountryName=United States&DmaCode=811&Latitude=39.5545&Longitude=-119.8062&PostalCode=&RegionName=NV&LocationId=0
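The decoded USRLOC value is an ordinary URL-style query string, so it can be pulled apart programmatically. Here is a minimal Python sketch, working from the decoded string above (the raw Base64 value is not reproduced here):

```python
from urllib.parse import parse_qsl

# The decoded cookie value from above: a cookie name followed by a query string
decoded = ("USRLOC=AreaCode=775&City=Reno&CountryCode=US&CountryName=United States"
           "&DmaCode=811&Latitude=39.5545&Longitude=-119.8062&PostalCode="
           "&RegionName=NV&LocationId=0")

# Split the cookie name from its payload, then parse the key=value pairs
name, _, value = decoded.partition("=")
fields = dict(parse_qsl(value, keep_blank_values=True))
print(name, fields["City"], fields["Latitude"], fields["Longitude"])
```

Note `keep_blank_values=True`, which preserves empty fields such as PostalCode instead of silently dropping them.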

The investigator should be aware that the latitude and longitude are generally based on IP address geolocation. This is something you reveal to a website simply by visiting it: the site automatically geolocates your IP address for general marketing purposes. As an investigator, you need to be aware that you are exposing this information to the websites you surf. I’ll comment more on geolocation in another post.

We all know that companies use tracking codes to identify us, but this is the type of information that might be on a suspect’s system if you go looking for it in his cookies. It also shows how much MySpace is collecting about you when you go to a suspect’s MySpace page during an investigation. I found a nice article at http://helpful.knobs-dials.com/index.php/Utma,_utmb,_utmz_cookies describing some of the cookie’s contents.

“The cookies named __utma through __utmz are part of Google Analytics, originally set by the urchin tracking module and now also by the newer ga.js. These cookies track usage on sites that use Google Analytics.”

The article goes on to describe the various pieces of the cookie.

__utma tracks each user’s number of visits, plus the first and most recent visit times.
__utmz tracks where a visitor came from (search engine, search keyword, link).
__utmb and __utmc are used to track when a visit starts and approximately ends (__utmc expires quickly).
__utmv is used for user-custom variables in Analytics.
__utmk holds digest hashes of the utm values.
__utmx is used by Website Optimizer, when it is being used.

Another good description of the Google Analytics cookies and their contents can be found at MoreVisibility (a marketing website). There are many other sites that collect similar information, such as Netcraft, Alexa, and WMtips (each of these can be accessed from our free Internet Investigators Toolbar).

The __utma cookie appears to be a string with six fields, delimited by a “.”. The last field is a single integer that records the number of sessions during the cookie’s lifetime.
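That structure can be checked directly. Below is a Python sketch using a hypothetical __utma value; the field layout (domain hash, visitor ID, first visit, previous visit, current visit, session count) follows the descriptions above, and the embedded timestamps are standard Unix epoch seconds:

```python
from datetime import datetime, timezone

# Hypothetical __utma value, formatted as:
# domain-hash . visitor-id . first-visit . previous-visit . current-visit . session-count
utma = "173272373.1123456789.1287177795.1287177795.1287782595.3"

parts = utma.split(".")
# Fields 3-5 are Unix timestamps; convert each to a readable UTC datetime
first, previous, current = (
    datetime.fromtimestamp(int(ts), tz=timezone.utc) for ts in parts[2:5]
)
sessions = int(parts[5])
print(f"{sessions} sessions; first visit {first:%Y-%m-%d %H:%M:%S} UTC")
```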

Here are the various pieces of the cookie with the date and times translated:

Date and Time Translation of Each Cookie Code Section*
Fri, 22 October 2010 13:23:15 -0800
Fri, 15 October 2010 13:23:15 -0800
Mon, 14 October 2030 13:54:22 -0800
Thu, 14 October 2010 13:52:03 -0800
Sun, 14 October 2012 13:23:15 -0800
Fri, 15 October 2010 13:23:15 -0800
Fri, 15 April 2011 01:54:24 -0800
Thu, 14 October 2010 13:54:24 -0800
Fri, 15 October 2010 13:53:15 -0800
Fri, 15 October 2010 13:23:15 -0800
Thu, 14 July 2011 23:00:00 -0800
Fri, 15 October 2010 13:23:16 -0800

*Decoding of the dates and times is thanks to the free “DCode” tool by Digital Detective.

Todd Shipley is Vere Software’s president and CEO.

Dissecting a MySpace page

May 17th, 2011

Having not seen this done anywhere else, I decided to look at some basic MySpace pages at random and determine if I could find anything in the source code that might be of any investigative interest.

In general, the source code of a MySpace page has lots of HTML code, but much of it is of no use to the investigator because it does not identify the user or provide investigative leads. There are, however, a couple of interesting things to be found if you look for them.

The actual server location of an image file

Images on a MySpace main page are not embedded in the page. They are linked to a separate web address at www.msplinks.com. Here is a real example randomly gathered from a MySpace page of an image that was on the page:


This highlighted portion of the code is obfuscated; it is actually encoded in Base64:


The Base64 translation of this portion of the code is:


The Base64 translated link contains the friendID of the page it is from and what appears to be a uniquely assigned imageID.

The www.msplinks.com address is just a white page when you go there. However, when you look at the source code for this page, you see some “old school letters” spelling out myspace.com:


Embedded video files and their original location

If you right click on an embedded video and select “copy embedded HTML” and paste that into a separate document, you can review the code and find the video location.

Actual example of an embedded video from a random MySpace page:

<object width="640" height="390"><param name="movie" value="http://www.youtube.com/v/Xz2MWedTbP0&hl=en_US&feature=player_embedded&version=3"></param><param name="allowFullScreen" value="true"></param><param name="allowScriptAccess" value="always"></param><embed src="http://www.youtube.com/v/Xz2MWedTbP0&hl=en_US&feature=player_embedded&version=3" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="640" height="390"></embed></object>

The actual page location on YouTube of the embedded video from the above example (reconstructed from the video ID in the embed code):

http://www.youtube.com/watch?v=Xz2MWedTbP0


Finding the FriendID

I also found the MySpace FriendID in several different locations in the page’s source code. A simple search for “FriendID” will find the numerical Friend ID used by MySpace.

Here is a random example of a FriendID found in MySpace source code:

var MySpaceClientContext = {"UserId":-1,"DisplayFriendId":281346014,"IsLoggedIn":false,"FunctionalContext":

This is the MySpace ID number that corresponds with the MySpace user name:


Add the Friend ID to the MySpace URL and it will take you to that friend’s page.
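As a sketch, with the DisplayFriendId from the earlier snippet (treat the exact host and parameter names as an assumption about the MySpace link format of the era):

```python
friend_id = 281346014  # the DisplayFriendId found in the page source

# Assumed URL pattern for MySpace profile pages of this era
url = ("http://profile.myspace.com/index.cfm"
       f"?fuseaction=user.viewprofile&friendid={friend_id}")
print(url)
```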


Tracking Code

I also found something of interest to the investigator and a good reason not to use your agency/company computer network to look at a MySpace page. Without much effort I found the code for MixMap. MixMap is tracking code that can be used to identify the IP addresses of anyone viewing a MySpace page. You can register at www.mixmap.com for access to your account and to prepare unique code for insertion on your MySpace page.

In a real example I found the following tracking code located in the MySpace page’s source code:

<a href="http://www.msplinks.com/MDFodHRwOi8vd3d3Lm1peG1hcC5jb20v"
target="_new" title="MySpace Tracker">
<img src="http://www.mixmap.com/661165/no_image_tracker_strict.jpg" border="0" height="1" width="1" style="visibility:hidden;" alt="MySpace Tracker" /></a></style></span>


This portion of the code is actually encoded in Base64:


The Base64 translation of this portion of the code is:
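Since the Base64 token is visible in the msplinks href above, the translation is easy to reproduce. A quick Python check (note the two-character “01” prefix that msplinks appears to prepend to the real URL):

```python
import base64

# Token taken from the msplinks href in the tracking code above
token = "MDFodHRwOi8vd3d3Lm1peG1hcC5jb20v"
decoded = base64.b64decode(token).decode("ascii")
print(decoded)  # → 01http://www.mixmap.com/
```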


MySpace beacon data

Another thing I found a little disturbing was what MySpace is collecting on its pages. I located the following code labeled MySpace.BeaconData, which indicates that MySpace appears to be tracking the persons viewing MySpace pages. That is not unusual from a marketing point of view, but the investigator should be aware that s/he is being tracked.

In the abbreviated random example below, you can see in the bolded portions the city, state and country I am coming from, as well as my computer’s operating system and the version of Internet Explorer I was using.

"dmac":"811","uff":"0","uatv":"br=MSIE 8.0&os=Windows NT 6.1","sip":"170659174","uid":"-2","pggd":
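One field worth a closer look is "sip". My assumption, not confirmed by MySpace, is that it is an IPv4 address packed into a 32-bit integer; under that reading, Python can unpack it in one line:

```python
import socket
import struct

sip = 170659174  # the "sip" value from the beacon data above

# Assumption: "sip" is an IPv4 address stored as a 32-bit big-endian integer
ip = socket.inet_ntoa(struct.pack(">I", sip))
print(ip)
```

For this particular value the result lands in a private (RFC 1918) range, which would be consistent with a connection behind a proxy or NAT; again, treat the interpretation as unconfirmed.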

In the following abbreviated random example I used the Tor network to hide myself, and you can still see (in the bolded portions) the city, state and country where the Tor exit node was located:

"0","uatv":"br=MSIE 8.0&os=Windows NT

In this example the Tor exit node just happened to be in Illinois. From an investigative standpoint, you need to know what you are exposing to the target website.

I’ll continue to review pages and comment as I find anything interesting. If anyone else has any good tidbits about MySpace or any other social networking sites let me know in comments.

Todd Shipley is Vere Software’s president and CEO.

Where’s the WebCase 30-day demo?

April 21st, 2011

In recent weeks, we’ve gotten a number of questions about why our 30-day demo is no longer available for download, and how investigators can get to know WebCase without it.

To answer the second part first: we found that our customers had a much better experience with WebCase when they used it after a walk-through. That’s why we take you through a one-hour webinar — you can either register for one of our monthly demos, or contact us to set up a time that is convenient for you and your team.

As for the software demo itself, we’ve recently made changes to WebCase that necessitated our retooling the demo. We don’t have a firm launch date, but we’ll let you know when we do.

Meanwhile, please do register for a webinar demo (be sure it’s a WebCase demo, though we’d love to see you for our Online Investigation Series too), and be sure to ask us if you have any further questions for us!

Twitter is officially now Creepy

April 5th, 2011

Okay, this is a play on words, but it really is getting creepy. Yiannis Kakavas, social media fanatic and software writer, has published a new free tool to scare the pants off of any sane Twitterphile. But if you are updating your Twitter page that much, you probably won’t really care.

Kakavas’s new tool, “Creepy,” is a social networking search tool — or in his words, a “geolocation information aggregator.” But unlike just any search, Creepy searches for where you have posted from, then figures out the posts’ longitude and latitude and makes a pretty map of where you have posted from each time. Can you say “stalker nirvana”?

Now this requires that you have turned on Twitter’s own geolocation service, or used some device (your smartphone) or web service (Foursquare, Gowalla, etc.) that collects your lat/long when you are posting. So, Kakavas’s tool is not collecting anything you haven’t already put online yourself. It just makes it easy for the investigator to get to.

Well, as I have posted before, where there is a great tool for stalkers there is a great tool for investigators. So let’s take a look at this new investigative tool.


Again, this is a simple-to-use tool. Go to the download page, download the Windows executable or the Ubuntu version, and install it. The Windows installer is quick and easy and will have you investigating in no time.

Start Creepy and in the settings authorize it to use your Twitter account. (You do need a Twitter account, but many investigators set up accounts purely for investigative purposes.) Now you can search Twitter users or Flickr users, along with photos from many other online applications. I searched both and easily found the users I was looking for.

Then click on the big “Geolocate Target” button. Under the “Map View” tab, the found lat/long coordinates will be displayed, along with their location on a mapping tool of your choice (there are several different mapping tools, including Google, to choose from).

It may take a few minutes to complete the search, but the results can be very revealing. Just as call detail records from cell phones can help investigators map out a suspect’s or victim’s movements over a period of weeks – including their normal patterns, and departures from normal – Creepy’s maps can show patterns of behavior with regard to social networks. The longer you track these patterns, the better picture you will have of your target.

It’s that simple… or it’s that Creepy.

Do you use Creepy? What have your experiences been?

Using NodeXL for Social Networking Investigations

March 4th, 2011

Mapping social network users is nothing particularly new. Social scientists use it to compare people’s networks online and offline, and thanks to tools like Loco Citato’s MySpace, Facebook and YouTube Visualizers, investigators have a valuable tool for finding criminals and their associates.

Complementing Loco Citato’s excellent tools is an open-source application called NodeXL, which maps Twitter, Flickr and YouTube users. A book about it from Elsevier, “Analyzing Social Media Networks with NodeXL: Insights from a Connected World,” talks about the tool’s social-science value. But whether law enforcement or corporate investigators are using NodeXL is unknown. (If you use NodeXL or have heard of other investigators using it, please let me know.)

Perhaps the most striking fact about NodeXL is that Microsoft made the tool. Licensed under the Microsoft Public License (Ms-PL), NodeXL is available on the open source download site CodePlex.

NodeXL stands for Network Overview, Discovery and Exploration for Excel. Yes, that is correct: Excel is the engine that runs the graphing. NodeXL is a template for Excel 2007, and it also works in Windows 7.

Crunching large datasets for social maps

Most of the information that appears to be available online so far about NodeXL regards its ability to easily graph data input into the spreadsheet. As social researchers put together relationships between users, the graphing ability allows the researchers to sift through large amounts of data from a social networking site and find associations that might have been missed.

For the few social networks it collects data from, it is quick and very powerful. Flickr, Twitter and YouTube are the only ones programmed directly into the template at this time. Some blogs, including Marc Smith’s (he is one of the authors of the NodeXL book mentioned above), mention that Facebook support is in the works. Hopefully other social media sites will be added as this tool matures.

To test what NodeXL can do with a Twitter account, I used my own, @Webcase. (Please note: you do not have to be logged into an account to use NodeXL.)

Very quickly NodeXL collected a list of the Twitter users being followed by @webcase. For visual fun, Excel also makes a graph of the followers (it takes a few settings to get the pictures into the graph, but once you know how, which took me a little research to figure out, it is pretty easy).

Of interest is the number of followers each user has, how many they are following, the number of tweets they have posted, their time zone, when they joined Twitter and the link to their Twitter page.

Pulling information about videos posted on YouTube is one of NodeXL’s excellent features. Let’s say you have an investigation where a particular term or name is used. You can enter that name into the YouTube video selection and get a list of videos, with the links to those videos, in a usable spreadsheet. Flickr searches are similar: you can search for image tags as well as Flickr users.

The real power of NodeXL, and the reason (besides its price tag) it is so popular among researchers and academics, is its ability to graph associations. If, for instance, you select a Twitter user to download and choose options to obtain data on both followers and following, along with any tweets that mention the user, you can collect a lot of data that can then be used to show associations. Associations for investigators = leads, witnesses or possibly even suspects.

By using the dynamic filters within NodeXL, you can limit the graph’s view to fewer contacts by raising the minimum number of interactions (tweets, retweets) an association must have.
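Conceptually, that dynamic filtering is just thresholding an edge list by interaction count. A toy Python sketch with invented account names:

```python
from collections import Counter

# Invented follower/mention edge list of the kind NodeXL collects
edges = [
    ("@alice", "@webcase"), ("@bob", "@webcase"),
    ("@carol", "@webcase"), ("@carol", "@webcase"),
    ("@webcase", "@alice"),
]

# Count how often each account interacts with the target...
mentions = Counter(src for src, dst in edges if dst == "@webcase")

# ...then keep only associations at or above a threshold, as the dynamic filter does
threshold = 2
strong = {user: n for user, n in mentions.items() if n >= threshold}
print(strong)
```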

Another plus about NodeXL: it has an active community working on this open source tool, and updates come out regularly.

For more information

A great primer on analyzing social media networks with NodeXL, “Analyzing Social Media Networks: Learning by Doing with NodeXL,” is available from the University of Maryland. (The posted copy on the UMD website says “Draft” and “Please do not distribute”. What? Do they know what the Internet does in Maryland?). Despite that, it is a good guide to some of NodeXL’s more esoteric graphing uses. For our purposes I’ll cover some of the quicker applications from an investigative standpoint.

If you are interested in finding out more about NodeXL, plug it into Google and you’ll get enough responses to keep you busy. Here are a few more references to get you started:




How the bad guys use social media: An interview with Todd Shipley

February 28th, 2011

Hardly a day goes by when the news isn’t reporting criminal use of social media to find and groom victims, start and fuel gang wars, or exploit other weaknesses. Todd Shipley joined Spark CBC host Nora Young last week to talk about some of these issues, along with how police can use social media to find the activity.

Listen to the 20-minute interview now to find out:

  • How criminals exploit their victims’ weaknesses, along with their own need for social connections
  • The importance of looking beyond the physical crime scene to its virtual extension
  • The social and technical skills police need to document online and other digital evidence before it gets to detectives
  • How online or cloud investigation is similar to network forensics (and unlike computer forensics)
  • What legal requirements police need to abide by when they go online

Got questions about Todd’s interview? Leave us a comment!

Tracing IP Addresses: Q&A

February 18th, 2011

We were very pleased to welcome back Dr. Gary Kessler to our “Online Investigations Basics” webinar series this week. Once again Dr. Kessler discussed some of the background and tools relevant to tracing IP addresses. Below is his companion presentation:

During the session, we took several questions from some of our listeners. One person asked whether tracing IP addresses overseas was any different from tracing them domestically. Answer: not technically; the overall process remains the same, but whether American investigators can secure foreign cooperation is a different question. The best bet is for investigators to contact legal representatives in American embassies for help dealing with law enforcement in another country.

Another participant asked whether TCP/IP packets would provide information on what kind of device accessed the Internet; in a related question, someone else asked if MAC addresses from two devices could show that they had been communicating with one another.

By themselves, packets contain no information on the type of device communicating. A device or router is needed to show where an IP address was assigned; the same is true for tracing IP addresses past a private network. And as for MAC addresses, they have only local relevance, not end-to-end applicability.
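A related point for triage: private (RFC 1918) addresses, like MAC addresses, have only local meaning, so a trace effectively stops at the NAT boundary that assigned them. Python’s ipaddress module makes the check trivial (the addresses below are illustrative):

```python
import ipaddress

# Illustrative addresses: two private (RFC 1918), one globally routable
for addr in ["10.44.13.102", "192.168.1.20", "130.203.0.1"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```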

We wish we could have gotten into more detail about one question: the biggest challenges with tracing IP addresses in the cloud. As the load of traffic increases and IPv4 addresses diminish (before IPv6 takes hold), more ISPs will begin to allow shared IP addresses. On the flip side, multiple IP addresses will resolve to single devices.

Again, we’re grateful to Dr. Kessler for taking the time to help educate the community on a complex issue. Have questions? Please contact us. And we’d love to see you at our future “Online Investigations Basics” webinars. In another few weeks, Cynthia Navarro will be talking about online sources of information. We hope you’ll join us!

Our Online Investigations Basics webinar series is back!

February 3rd, 2011

We’re excited to announce the return of our popular, free “Online Investigations Basics” webinar series! Designed to help investigators maximize their online evidence collection skills, the monthly webinars will feature investigative techniques and issues such as:

  • Tracing IP Addresses
  • Online Sources of Information
  • Online Identity Theft Investigations
  • Internet Relay Chat (IRC) Investigations
  • Investigating Social Media

The webinar series builds on the original series, offered in the fall of 2009, by offering both new courses and updated content from some returning instructors as well as new voices. Established experts in their fields, the Online Investigation Basics instructors will take questions from, and interact with, webinar attendees during a structured Q&A period within each 60-minute presentation. The webinars are meant for investigators from all sectors — law enforcement, corporate and independent.

In addition, we’ll continue to provide our monthly WebCase webinars, which allow investigators to get to know our software when they can’t attend our on-site training.

The first Online Investigation Basics webinar is Thursday, February 17. Dr. Gary Kessler of Gary Kessler & Associates will present “Tracing IP Addresses,” in which he will introduce concepts about the TCP/IP suite, the Internet, IP addressing and domain names, and the administration of Internet names and numbers. He will also demonstrate tools to support IP tracing.

Want to know more? Sign up today!

Data retention vs. criminal anonymizer use

February 2nd, 2011

This week, German authorities released data suggesting that Internet Service Provider (ISP) data retention policies – which the United States hopes to implement – could actually have a negative impact on online crime fighting.

Why? As the article puts it:

This is because users began to employ avoidance techniques, says AK Vorrat. A plethora of options are available to those who do not want their data recorded, including Internet cafés, wireless Internet access points, anonymization services, public telephones and unregistered mobile telephone cards.

The European Union is looking at policy changes that protect both privacy and public safety. In the meantime, however, we know that “hard core” criminals will continue to use anonymization technology, and it will take more than policy to address this.

That’s why we’re pleased to announce that the National Institute of Justice has awarded us funding under its Electronic Crime and Digital Evidence Recovery grant. The funding is for the development of forensic and investigative tools and techniques to investigate criminal use of Internet anonymizers – tools that law enforcement doesn’t currently have.

We’ll be working in conjunction with researchers at the University of Nevada’s Department of Computer Science and Engineering on the development, while investigators from the Washoe County Sheriff’s Department will test the software and offer their feedback. Meanwhile, we’d love to hear from you. What has your experience been trying to investigate online crime despite anonymizers?

Simplifying the webmail collection process

January 13th, 2011

A recent ComputerWorld article discussed the security problems posed by webmail within organizations. In short, because webmail travels over HTTP rather than SMTP, the organization does not protect against data leakage as it does with its own email system.

The reasons for this are many. In 2008, ComputerWorld ran an article that discussed ways webmail could breach even organizations with strong security. As always, the human factor can be a challenge. Well-meaning employees may use webmail to segregate business from personal email, when they are required not to conduct personal business on company accounts; employees may also use webmail to bypass overly complicated email security procedures.

At that point, even if employees’ personal webmail accounts aren’t being archived per the law, their email may become discoverable in the event of litigation. How to document the emails’ content?

In an October 2009 article for EDEN: The Electronic Data Extraction Network, Jonathan Yeh discussed various ways in which webmail could be captured for archival purposes. Among them:

  • Download the email locally using an email client over POP or IMAP. It can then be searched just like other digital evidence.
  • If those protocols cannot be used, fall back on screenshots, web page capture, or even printing.
  • Obtain data via browser artifacts.
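As a sketch of the first option: once messages come down over POP or IMAP (via Python’s poplib or imaplib, for instance), each one is raw RFC 822 data that the standard email library can parse and search locally. The message below is a stand-in, not a real capture:

```python
import email
from email import policy

# Stand-in for one raw RFC 822 message fetched over POP/IMAP
raw = (b"From: sender@example.com\r\n"
       b"To: target@example.com\r\n"
       b"Subject: Quarterly report\r\n"
       b"Date: Thu, 13 Jan 2011 09:00:00 -0800\r\n"
       b"\r\n"
       b"Body text of the captured webmail message.\r\n")

# Parse the bytes into a searchable message object
msg = email.message_from_bytes(raw, policy=policy.default)
print(msg["Subject"], "|", msg["From"])
```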

Each of these methods is, however, complicated. Yeh goes into the issues in some detail, ending with the need to document each step of the collection process. While it is true that courts accept expert testimony together with downloaded or screenshot data, nothing about these collection methods proves that the content was not manipulated in any way.
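One common mitigation, and part of what purpose-built tools automate, is hashing each captured artifact at collection time so that later tampering is detectable. A minimal sketch (the captured bytes are a placeholder):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest to record in the case notes."""
    return hashlib.sha256(data).hexdigest()

capture = b"<html>captured webmail page</html>"  # placeholder content
print(fingerprint(capture))
```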

In addition, the procedures Yeh describes, along with some of the issues that the investigator must take into account, are time-consuming. Under such conditions, the margin for human error is greater, and as Yeh concludes, “The reliability of evidence can often only be gauged by the reliability of the methods used to collect it, and proper documentation can be the difference between admissibility and inadmissibility in court.”

WebCase simplifies the “screenshots and web page captures” process, and in doing so addresses the reliability issue that Yeh brings up. That it is currently the only tool to do so should not be lost on e-discovery experts or other investigators.

Want more information? Schedule your free demo today!