
Topic: Browser security paranoid privacy panic (Read 7827 times)

  • ersi
Browser security paranoid privacy panic
Transifex, the website where many open-source distros and other projects host their translation environment, has changed the way traffic works after users log in. There are no good cookies anymore that I can trace, and I had to disable Adblock to be able to stay logged in.

Is this a wider trend?

When you log in to websites, what security measures do you take? Do you check or modify the headers and the referrer that your browser sends to the server? Do you take a look at what headers the server replies with? Do you count the cookies? Is your browser set to erase history on close? Share your habits and tips.

  • ultraviolet
Re: Browser security paranoid privacy panic
Reply #25


wouldn't an extension like 'Ghostery' go a long way in stopping most of this?

Ghostery and Adblock, as companies, make money by collecting and sharing (selling) information on what people like to block


Really? I couldn't find anything like that in their privacy statements. Do you have a link where I can read about these extensions lying to us?

Ghostery's privacy statement:
https://www.ghostery.com/en-GB/privacy-addon

in AdBlock's FAQ:
Your browser may require AdBlock to ask for permission to access your browsing data so it works on all tabs in your browser. AdBlock won't save or retrieve your personal browsing habits or information for any reason beyond what is required to make it work. AdBlock is entirely supported by voluntary donations from users like you, and collects no information for advertising or promotional purposes.

in AdBlock Plus's FAQ:
Adblock Plus stores some data in the Firefox profile on your computer. Adblock Plus never transmits this data to any servers, but other extensions and services, such as Firefox Sync, may do so. Most of the data (your preferences, filter subscriptions and custom filters) is unobjectionable privacy-wise. However, filter hit statistics and recent issue reports could be used to reconstruct your browsing history. Adblock Plus treats this information identically to how history data is treated by the browser: this data isn't stored if you are using Private Browsing mode and is removed if you choose to clear your browsing history.
"I kill monsters and zombies with infeasibly large plasma-based weaponry"

  • ersi
Re: Browser security paranoid privacy panic
Reply #26
Ghostery Makes Privacy Marketable
Quote
The anonymized data Ghostery receives if users choose to opt in to its GhostRank service turns out to be valuable to businesses....

Ghostery then sells the data to clients such as Procter & Gamble so they can better understand how their online marketing efforts are working or failing. If lots of Ghostery users are blocking a particular service, that might be a sign to work with a different provider or to take a different approach.

Now, the claim that the monetizable data is obtained exclusively via GhostRank and not via other parts of Ghostery is a plain empirical claim, subject to falsification. If I had the time, I might look more closely at Ghostery's code to prove or disprove it, but it's so much easier to simply live without Ghostery. The claim directly invites verification.

And Adblock's auto-updating is also the classic "calling home" issue. I have not cared to monitor my data traffic with minute precision, but there are many reasons why I don't like auto-updates.

  • ultraviolet
Re: Browser security paranoid privacy panic
Reply #27
I suppose it's a catch-22 situation, really: you install Ghostery to stop ad companies tracking your browsing habits, and maybe it sells your data to a company to help its ads avoid being blocked, so they get through Ghostery and onto your page.
I might have installed the Android Ghostery browser on my tablet, but only because it's a great lightweight program. I'm more concerned with having a good ad blocker, really, so whoever has my data has no way of using it on me for personalised ads.

Have you tried out the Electronic Frontier Foundation's 'HTTPS Everywhere' add-on? And is it worth a try?
"I kill monsters and zombies with infeasibly large plasma-based weaponry"

  • ersi
Re: Browser security paranoid privacy panic
Reply #28

Have you tried out the Electronic Frontier Foundation's 'HTTPS Everywhere' add-on? And is it worth a try?

I remember having thought about it on PS, but not on Android. I haven't tried it, though. I was reading about Tor and HTTPS Everywhere at the same time, and gave Tor a try.

I still think that a urlfilter.ini type of thing is best. Adblock comes closest to working this way.

  • ultraviolet
Re: Browser security paranoid privacy panic
Reply #29


Have you tried out the Electronic Frontier Foundation's 'HTTPS Everywhere' add-on? And is it worth a try?

I remember having thought about it on PS, but not on Android. I haven't tried it, though. I was reading about Tor and HTTPS Everywhere at the same time, and gave Tor a try.

I still think that a urlfilter.ini type of thing is best. Adblock comes closest to working this way.


The Tor browser is great. I downloaded it a little while back for Windows; I don't have Windows anymore, though, but I'm sure it featured HTTPS Everywhere among its add-ons.
"I kill monsters and zombies with infeasibly large plasma-based weaponry"

  • ersi
Script questions
Reply #30
Code:
var addthis_config = {"data_track_addressbar":true};


Question #1: What is this code (found in a website) meant to do?

Question #2: Should it do it?
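For context, a hedged reading (an assumption about what AddThis's data_track_addressbar flag does, not verified against AddThis's code): it appears to enable address-bar tracking, where the script appends an identifying fragment to the page URL so that links copy-pasted from the address bar carry a tracking token. The general pattern, as an illustrative sketch:

```javascript
// Hypothetical sketch of fragment-based address-bar tracking.
// Appending a fragment changes the address bar without a reload and
// without any request to the server; the token travels only when a
// visitor copy-pastes the URL somewhere else.
function tagAddressBar(url, token) {
  return url + '#' + token;
}

const shared = tagAddressBar('https://example.com/article', 'xyz.99');
console.log(shared); // -> "https://example.com/article#xyz.99"
```

If that reading is right, blocking the script (or stripping fragments before sharing) defeats the tracking without breaking the page, since fragments never reach the server.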

  • krake
Re: Browser security paranoid privacy panic
Reply #31

  • ersi
Re: Browser security paranoid privacy panic
Reply #32
Mkay. Reading some on that thing called History API.

MANIPULATING HISTORY FOR FUN & PROFIT
Quote
Why would you manually manipulate the browser location? After all, a simple link can navigate to a new URL; that's the way the web has worked for 20 years. And it will continue to work that way. This API doesn't try to subvert the web. Just the opposite.

What follows is unintelligible nonsense.

So, to answer my question #2, whatever that thing does, it should not do it.

Anyway, here's a demo to see if History API is available in your browser http://html5demos.com/history
In all browsers I used to load the URL with JS turned on, I got the text "HTML5 History API available", even in Opera 11.6*. Looks like one more reason to keep JS turned off.

  • Frenzie
  • Administrator
Re: Browser security paranoid privacy panic
Reply #33
What follows is unintelligible nonsense.

Actually the history API is quite nice because it allows JS to use pushState to stick in a regular URL while still using JS. When done right there should be very little difference between with JS and without JS, except that with JS should be slightly faster. Of course, it's seldom done right... :P

What the History API has to do with #whatever of ?something=whatever is their secret. I can only imagine it makes some particular use case the tiniest bit simpler to write. In this case they're suggesting not using links like this because they'd prefer to stick tracking data in there. Meh, bunch of weirdos.
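To make the pushState mechanics concrete, here is a minimal stand-alone model of the relevant History API semantics (plain JavaScript, not the real browser API; HistoryModel and its method names are invented for illustration):

```javascript
// Toy model of History API semantics: pushState adds an entry with no
// page load; replaceState swaps the current entry; back() revisits.
class HistoryModel {
  constructor(startUrl) {
    this.entries = [startUrl];
    this.index = 0;
  }
  get current() { return this.entries[this.index]; }
  pushState(url) {
    // Like history.pushState(state, title, url): drop any forward
    // entries, then append the new URL as the current entry.
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(url);
    this.index += 1;
  }
  replaceState(url) {
    // Like history.replaceState: overwrite the current entry in place.
    this.entries[this.index] = url;
  }
  back() {
    if (this.index > 0) this.index -= 1;
    return this.current;
  }
}

const h = new HistoryModel('/issues');
h.pushState('/issues/731'); // URL changes, no reload, new history entry
console.log(h.current);     // -> "/issues/731"
console.log(h.back());      // -> "/issues"
```

The real API also carries a state object and fires a popstate event on back/forward navigation; the model above only shows the entry bookkeeping.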

  • ersi
Re: Browser security paranoid privacy panic
Reply #34
Thanks for taking the time to explain and analyse the History API, but I honestly see no valid use case. Based on the website I found, there can only be evil-minded use cases, and I won't waste time trying to deconstruct them. Just avoid them.

  • Barulheira
Re: Browser security paranoid privacy panic
Reply #35
We use Trello, which seems to be using that API. It's very nice to open tickets without reloading the whole page, while getting a valid URL that links to that ticket, which can be bookmarked or shared with other members of the team.

  • Frenzie
  • Administrator
Re: Browser security paranoid privacy panic
Reply #36
Yup, exactly. :up:

  • ersi
Re: Browser security paranoid privacy panic
Reply #37
Wasn't "without reloading the whole page" option supposed to be handled by the way browsers manage cache? E.g. in Opera you can timeset how often the same pages/images get rechecked.

I still don't see the point with History API. And in particular I don't see why JS of webpages should read the address field. Isn't the referrer header enough?

Edit: As of today, I have begun to take a closer look what's going on in tcpdump. Pretty interesting.
  • Last Edit: 2015-03-09, 18:41:54 by ersi

  • Barulheira
Re: Browser security paranoid privacy panic
Reply #38
When the whole page is a huge panel with lots of information about many tickets, and I just want to see details about a handful of them, with very few changes to the contents of the main panel, and with the option to go back in history and revisit some tickets I've seen last, then the History API is quite handy. Cache control alone wouldn't work so well.

  • Frenzie
  • Administrator
Re: Browser security paranoid privacy panic
Reply #39
Wasn't "without reloading the whole page" option supposed to be handled by the way browsers manage cache? E.g. in Opera you can timeset how often the same pages/images get rechecked.

The use case is the same as for frames. Except frames actually had the exact same issues with bookmarking that XMLHttpRequest originally did. And of course you might reload only a minuscule part of the page, because you have a kind of fine-grained control that frames never offered. Actually this forum does that as well if you edit your message in place or when you use preview, except that in our case the URL you're on never changes. It should be more efficient for every step of the chain: the server has less work generating things, the transport has to transmit less data, and your browser in turn has to redraw less. That last part only really works out when you don't load over a megabyte of scripts like, say, Twitter. :P

I still don't see the point of the History API. And in particular I don't see why a webpage's JS should read the address field. Isn't the referrer header enough?

Web pages have been able to read and write the URL through window.location/document.location since forever. The History API enables adding a URL to the history without ever causing a reload. The thing is, the particular use case on the earlier article wouldn't cause a reload either because it uses #blabla. That's a simple matter of window.location = window.location + '#trackerstuff' and it wouldn't ever reload. That's why websites like Gmail still use annoying URLs like mail.google.com/mail/u/0/#inbox/14ba2476f4b7b0bd, although these days it would work just as well without the pound sign. As far as I can tell the History API tracker shtick is a bunch of hooey and all of this has been possible since IE4-ish.

Now window.location = window.location + '/someplace-else' -- that would cause a reload. Whereas with the History API you can simply switch out a part of the page and tell the browser you're now on '/someplace-else'.
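The fragment point can be checked with URL parsing alone; a small sketch, using the Gmail-style URL from above:

```javascript
// Sketch: appending '#...' changes the URL without changing anything the
// server ever sees, which is why assigning a fragment never reloads.
const before = new URL('https://mail.google.com/mail/u/0/');
const after = new URL(before.href + '#inbox/14ba2476f4b7b0bd');

console.log(after.hash); // -> "#inbox/14ba2476f4b7b0bd"

// The server-visible part (origin + path + query) is identical:
const serverSees = (u) => u.origin + u.pathname + u.search;
console.log(serverSees(before) === serverSees(after)); // -> true
```

Whereas changing the path component (the '/someplace-else' case) would alter what serverSees returns, and a plain location assignment to it would trigger a full load.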

  • ersi
Re: Browser security paranoid privacy panic
Reply #40
At least we seem to agree that the tracker thing is pointless and probably evil. As for the rest of the functionality (reloading only the necessary part, as with frames, if I got it right, and putting into the browser history things you may want to have there), it sounds like you are talking about some Web 2.0 sort of pages. They are inaccessible without JS, right? In which case they hardly justify their own existence.

As for tampering with the browser's history, this has always been a privacy concern. It seems it's not the browser that records the history of visited pages, but websites reading from and writing into the browser's history. This would be okay only in a very limited sense: allow the current content of the address field to be written into browser history, and nothing more. Evidently, websites have always been able to sniff out more.

And I think I see now what browser vendors meant when they were advertising the novel truncated URLs. This way the security junk like #-and-gibberish and other such stuff administered by webpages remains invisible to the users and everybody will be happy because the world is oh so safe.

  • Frenzie
  • Administrator
Re: Browser security paranoid privacy panic
Reply #41
They are inaccessible without JS, right?

I think Web 2.0 is just a stupid marketing term. However, that depends on what it is. A site like Twitter now actually works fine with and without JS, but in the pre-History API days you couldn't have URLs that worked both ways. Now I wouldn't call Twitter a good example overall -- it seems to be extremely heavy for what little it does -- but it definitely became a better site because of the History API.

but websites reading from and writing into browser history.

If websites can obtain information about history it's a security bug (like the visited link color issue in the past). The History API only allows a website to silently change its URL without adding a new history entry (I struggle to think of a use case for that...) and to silently change its URL while adding a new history entry.

  • ersi
Re: Browser security paranoid privacy panic
Reply #42

I think Web 2.0 is just a stupid marketing term.

Yes, it's a marketing term, but not merely stupid. It poses dangers.


If websites can obtain information about history it's a security bug (like the visited link color issue in the past).

Precisely on topic.


The History API only allows a website to silently change its URL without adding a new history entry (I struggle to think of a use case for that...)

There is none. You can stop struggling now.


...and to silently change its URL while adding a new history entry.

And when there's no History API, websites cannot add a new history entry?

Let's try to flesh out an example that I can understand. Trello.com was mentioned. I am more familiar with the way Github looks (even though not with how its code interacts with browsers).

Let's say I'm staring at the list of Otter's issues and someone publishes a new issue at that very moment. Github pushes (by means of JS) the new issue into the browser display, kindly taking care that only the newly published issue emerges while none of the rest of the page gets refreshed.

Questions: How is this dependent on the History API implementation in the browser? Doesn't it purely depend on JS as such? What does this have to do with the ability to write into browser history? How is this different from an ordinary refresh? On an ordinary refresh the URL does not change, obviously. With or without refreshing (JS on or off), my staring at a single page should not change the URL in the address field, should it?

Another case. Let's say I am staring at the comments thread of a specific issue in Github. The right-hand column and the top bar of the webpage are common no matter which issue I look at. Let's say I open up a different issue. The comments thread now has different content, while the right-hand column and the top bar are the same. If I understood rightly, the awesome History API kindly prevents the browser from re-downloading the right-hand column and the top bar while I switch to comments under a different issue.

Questions: Is there anything in this use case that is not meant to undermine the purpose of browsers' handling of cache? Browsers are supposed to handle cache this way (correct me if I'm wrong):

- Update (re-download) only the stuff that is not found in cache or that is expired.
- Don't update the stuff that is found in cache and that is not expired.
- The stuff is identified by URLs, e.g. when a specific image (with a specific URL) is displayed on multiple webpages, the image is drawn from cache, while the rest of the page is drawn over the web.
- Expiration is determined as per browser and website settings, where browser settings should have priority.

This is how it should work ideally, right? (I know it doesn't in reality, but let's keep to the standards for now.) How is it supposedly better when something called the History API does this work in interaction with the website's JS? What are the improvements over the browser cache? And again, what does this have to do with the ability to write into the browser's history?
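For reference, the expiration rule in the list above boils down to a freshness check of roughly this shape (a simplification of real HTTP caching; isFresh is an invented helper):

```javascript
// Simplified freshness check: a cached response may be reused while its
// age is below the max-age the server declared (Cache-Control: max-age=N).
function isFresh(storedAtMs, maxAgeSeconds, nowMs) {
  const ageSeconds = (nowMs - storedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

const stored = Date.parse('2015-03-09T18:00:00Z');
// Static asset with max-age=86400 (one day), checked one hour later:
console.log(isFresh(stored, 86400, stored + 3600 * 1000)); // -> true
// Dynamic page with max-age=0: never reused without revalidation
console.log(isFresh(stored, 0, stored + 1000)); // -> false
```

Real caches also honour Expires, ETag revalidation, and so on; the point is only that freshness is decided per URL, as the list says.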

  • Barulheira
Re: Browser security paranoid privacy panic
Reply #43
"Stuff" is identified by URLs. But there's another way to understand them: information is identified by URLs. If I've got some information from the web, then there can be a URL that identifies it. If it can be cached, the better. The next time I visit that URL, if it is in cache and valid, then there's no need to download it, otherwise it will be downloaded - at least the first time. In either case, it's possible to check in History that I've already seen that information (i. e. URL) before (no matter if those files are in cache, which is an internal browser issue).

  • ersi
Re: Browser security paranoid privacy panic
Reply #44

"Stuff" is identified by URLs. But there's another way to understand them: information is identified by URLs. If I've got some information from the web, then there can be a URL that identifies it. If it can be cached, the better. The next time I visit that URL, if it is in cache and valid, then there's no need to download it, otherwise it will be downloaded - at least the first time. In either case, it's possible to check in History that I've already seen that information (i. e. URL) before (no matter if those files are in cache, which is an internal browser issue).

And is it the website that should check your history and cache to determine what content to push on you, or is it the browser that should check its own history and cache against the website and determine what to download?

  • Frenzie
  • Administrator
Re: Browser security paranoid privacy panic
Reply #45
Github pushes (by means of JS) the new issue into the browser display, kindly taking care that only the newly published issue emerges while none of the rest of the page gets refreshed.

That shouldn't have anything to do with the History API. At no point in time do you leave the page https://github.com/OtterBrowser/otter-browser/issues. But when you navigate away from that page to https://github.com/OtterBrowser/otter-browser/issues/731 -- that's when the History API comes into play. In both cases the site uses XMLHttpRequest to update part of the page, but it only makes sense to behave as if you loaded another page in the latter case in order to preserve linking, bookmarking & back/forward functionality.

Questions: Is there anything in this use case that is not meant to undermine the purpose of browsers' handling of cache? Browsers are supposed to handle cache this way (correct me if I'm wrong):

- Update (re-download) only the stuff that is not found in cache or that is expired.
- Don't update the stuff that is found in cache and that is not expired.
- The stuff is identified by URLs, e.g. when a specific image (with a specific URL) is displayed on multiple webpages, the image is drawn from cache, while the rest of the page is drawn over the web.
- Expiration is determined as per browser and website settings, where browser settings should have priority.

Dynamic pages (like this forum) are always changing, so their content will seldom be cached. Only static resources like CSS, JS, and images are effectively cached. On this website it doesn't matter much one way or the other, but on high-volume websites like Twitter or Github a slight reduction in CPU time and a more tangible reduction in transfer size can add up really quickly. Every time you load a page on this forum, you're loading at least 16 kB of header and footer stuff that might just as well have remained static. Simply put, the user will probably profit and the server will definitely profit. (Forgetting for a moment that the user will also benefit from a lesser server load.)

You're right to note that a dynamically generated https://github.com/OtterBrowser/otter-browser/issues/731 won't be cached except on the first load. On the flip side, if you're browsing around Github, the results of the XMLHttpRequests can be cached (I don't know if they are; it depends on what the HTTP headers say). So it's perfectly conceivable that the page https://github.com/OtterBrowser/otter-browser/issues/731 would be cached in two different ways: once as a full page and once as a partial-page request.
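That "cached in two different ways" idea can be sketched as a cache keyed on both the URL and the request variant (a toy model; real browsers achieve this with request headers and the Vary response header):

```javascript
// Toy cache keyed on URL plus variant: the same URL can hold one entry
// for the full page and one for the partial (XMLHttpRequest) response.
const cache = new Map();
const cacheKey = (url, partial) => url + (partial ? '|partial' : '|full');

cache.set(cacheKey('/issues/731', false), '<html>full page</html>');
cache.set(cacheKey('/issues/731', true), '<div>comments only</div>');

console.log(cache.size); // -> 2 (one URL, two independent entries)
console.log(cache.get(cacheKey('/issues/731', true))); // -> "<div>comments only</div>"
```

A later partial request for the same URL would hit the partial entry without disturbing the full-page one, which is all "two ways" means here.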

  • Barulheira
Re: Browser security paranoid privacy panic
Reply #46

And is it the website that should check your history and cache to determine what content to push on you, or is it the browser that should check its own history and cache against the website and determine what to download?

The browser. The website doesn't push anything, AFAIK.

  • krake
Re: Browser security paranoid privacy panic
Reply #47
For pushing there is a special protocol, thanks to Google. It's called SPDY.