We have recently been updating our IP Mapping Joomla component to handle HTML5 geolocation detection, and thought it might be of interest to others.
IP Mapping was originally designed with the aim of displaying IP addresses upon Google Maps, and experience has shown that although it works well, it is very reliant upon the accuracy of the data held by the various database and communication suppliers. The various suppliers of IP-to-location mapping vary considerably in the accuracy of their location information. We ourselves have been ‘located’ several hundred miles away from where we were physically sitting, depending upon which IP-location provider we were using and when we were determining the location. Whilst this may be adequate for some, for others it is a little hit and miss. I am thinking here of a ‘local’ village or town site intending to serve the local neighbourhood, which desires to know how widespread its visitors are.
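HTML5 geolocation offers a way around this, since the browser itself can report a position (from GPS, Wi-Fi and so on) that is usually far more accurate than an IP lookup. The sketch below shows the general browser-side pattern, falling back (for example, to an IP-based lookup) when geolocation is unsupported or refused; the function names are our own illustration, not the actual component code.

```javascript
// Format a Position-like object into the 'lat,lng' string a map API expects.
function formatCoords(pos) {
  return pos.coords.latitude.toFixed(4) + ',' + pos.coords.longitude.toFixed(4);
}

// Ask the browser for its location; fall back (e.g. to an IP-based lookup)
// when HTML5 geolocation is unsupported, refused, or times out.
function detectLocation(onFound, onFallback) {
  if (typeof navigator !== 'undefined' && navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      function (pos) { onFound(formatCoords(pos)); },
      function () { onFallback(); },              // refused or unavailable
      { enableHighAccuracy: false, timeout: 10000 }
    );
  } else {
    onFallback();                                 // no geolocation support
  }
}
```

Note that browsers ask the user for permission before handing over a position, so the IP-based fallback still has a role to play.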
We came across this small post about improving the speed of a web site, which we thought might be of interest to some.
We have always displayed a few relevant newsfeeds upon our web site, but to be honest it has never been very high on the visitor list, or upon our own priority list. We long ago noticed that the default display for the ‘Newsfeeds Categories’ in the front end of our site consisted merely of two lines, one for each of the two newsfeed categories we use, each of which also acted as a link to the underlying newsfeeds in the respective category. There was no page header, no display of the breadcrumb information, and no details of the category descriptions. In short, a very barren page.
Originally it was suspected that it might be a template or CSS problem. Attempts to change the menu settings, resave module specifications etc. all proved fruitless. It didn’t matter what the menu settings were; they were silently ignored. One is tempted to say it was a cache problem, but bearing in mind that this had been the situation for several months, if not longer, and the various caches had been manually cleared several times during that period, it was obvious that something else was amiss.
With the change to a new site template, the situation remained unresolved and it was starting to get a little annoying. So, given an hour or so spare, we decided to investigate further. Inspection of the PHP code underlying the display revealed no clues, and despite retrying all our previous steps we were no further forward.
Searching the web for similar reported problems drew a complete blank, apart from a link to a very strange problem we had ourselves encountered with the display of breadcrumbs on a previous occasion. Looking back we found our previous blog entry. [So keeping a blog can prove useful.] We thus decided to clear all URL entries held by the sh404SEF component for the com_newsfeeds component. Lo and behold, on refreshing our web page the correctly formatted page was shown, complete with headers, breadcrumbs, descriptive text etc.
We realise that sh404SEF keeps track of URL links, but why this should impact the page display is currently a bit of a mystery. It doesn’t itself cache pages, but it must somehow also keep track of which modules were shown, and what the ‘previous’ settings were, for a page for which it is keeping a record of the link. I am sure that I have never read anything of this sort in the component documentation.
What we learnt again from this is that sh404SEF seems to have some strange characteristics which impact what is displayed upon the screen, over and above just converting non-SEF URLs to a SEF format. So if you are ever seeing a similar type of problem and everything else seems to be failing to resolve it, it might, if your site is using sh404SEF, be worth clearing your entries and seeing if that resolves the problem. Certainly stranger things have happened.
We are pleased to announce the revamp of our web site.
We have retained all of our previous content, which is now presented in a template designed by Joostrap making use of Bootstrap v3.
This redesign is intended to reflect some of the newer emerging technologies, and also to provide increased performance and a more streamlined design. It is fully responsive and mobile-ready, using HTML5 as standard.
This is the first of a number of changes we will be making to the site over the next month or so. Our previous template has served us well, but all things have their time and it was time to move with the times.
Update 25/06/2014: Have also upgraded the version of Joomla to the latest release. Hopefully everything will remain stable.
The topic of the moment appears to be ‘Canvas Fingerprinting’, with a number of articles available on the web. It is the latest development in use for tracking the movement of users on the web. You do not need to click on a widget to be tracked; just visiting the site is sufficient. It exploits subtle differences in the rendering of the same text to extract a consistent fingerprint, which can easily be obtained in a fraction of a second without the user being made aware.
A research paper concluded that code used for canvas fingerprinting had been in use earlier this year on 5,000 or so popular websites, unknown to most of them. Most but not all the sites observed made use of a content-sharing widget from the company AddThis.
An invisible image is sent to the browser, which renders it and sends data back to the server. That data can then be used to create a “fingerprint” of the computer, which could be useful for identifying the computer and serving targeted advertisements.
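As an illustration of the mechanism (our own sketch, not the actual tracking code used by AddThis or anyone else), the snippet below draws a fixed string to an off-screen canvas and hashes the rendered result. Tiny rendering differences between machines (fonts, antialiasing, GPU) change the hash, while the same machine keeps producing the same value.

```javascript
// Simple 32-bit FNV-1a hash; enough to reduce pixel data to a short token.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Browser-only sketch: draw a fixed string to a hidden canvas, then hash the
// rendered image. Differences in fonts, antialiasing and GPU rendering make
// the hash vary between machines while remaining stable on any one of them.
function canvasFingerprint(doc) {
  const canvas = doc.createElement('canvas');
  canvas.width = 220;
  canvas.height = 40;
  const ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillText('fingerprint test \u263A', 2, 2);  // any fixed text will do
  return fnv1a(canvas.toDataURL());               // hash of the rendered pixels
}
```

The `canvasFingerprint` function needs a browser `document` to run; the hash itself is ordinary code.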
But of the several emerging tracking methods, canvas fingerprinting isn’t the greatest threat: it’s not terribly accurate, and it can be blocked. The Electronic Frontier Foundation (EFF) recommend their own ‘Privacy Badger’ or the Disconnect add-on.
The list of sites that still track you is at this address.
So much for privacy.
We have noticed over the past few months an increase in the number of web accesses to various URL addresses upon our site with a string starting ‘/RK=0/RS=’, followed by strings of other characters. To us they are obviously some attempt to gain access to information, but we were a little puzzled as to how they might possibly work. The URLs they are attached to are varied, but a lot of them seem to be blog addresses. The RS= looks like it could introduce a regular expression of sorts for a pattern match, since some (but not all) are followed by a caret ^, but that is speculative.
They look to be a form of SSI injection, attempting to pass tokens into the URL for some purpose.
Apparently we are not alone and there is much discussion upon the web as to exactly what it is trying to achieve and who might be behind it, but no clear answer is currently known.
One way to remove them might be a simple .htaccess rule similar to the following:
RewriteRule ^(.*)RK=0/RS= /$1 [L,NC,R=301]
An alternative would be to block the IP addresses from which they are coming, but if these are dynamic addresses that are later reassigned to other users, the risk is that you may end up blocking legitimate traffic.
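If one did choose to block addresses, and was confident they were stable, an Apache 2.4 fragment along these lines would do it (the address range below is purely illustrative, not one of the offenders):

```apache
# Block the illustrative range 203.0.113.0/24 while allowing everyone else.
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```

On Apache 2.2 the older `Order`/`Deny from` directives would be used instead.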
One question that is often asked is how one preserves one’s digital assets and passes them on to one’s heirs. We recently read about a new service that may offer a solution.
Longaccess promises to be a cold storage of sorts for your digital life. It's a cloud-based service that operates off Amazon's S3 data centres, but unlike other file lockers such as Dropbox or Google Drive, Longaccess aims to be less accessible, but more dependable. It describes itself as a ‘safe’ on the Internet, a location where one can store files fully encrypted and secured, safe and ready to be accessed for decades.
Longaccess is not a file syncing service, nor is it a file sharing service. It is a service for storing files for long periods of time; files that are NOT updated, or changed at all. Every time a file is created and uploaded to a Longaccess Archive using the desktop application, one gets an Archive Certificate. This is a simple text file that contains all the information required to access the data in the future:
- Anyone with access to the Archive Certificate can access the corresponding Archive data: Nothing else is required, not even a username or password.
- Access to the Archive data is impossible without the corresponding Archive Certificate. No one, not even the owner, nor Longaccess, can decrypt the Archive without the Archive Certificate.
One can think of the Archive Certificate as a full entitlement to access the data of a specific Archive. If one gives a copy to someone else, they can also access the data.
There are a number of questions re cost etc. that immediately spring to mind, including how they can guarantee they will be around in a decade or so, questions which they try to answer on their web site.
Sounds interesting, and may well be a way to preserve those ‘old’ photographs for posterity; one worth watching for a future opportunity.
It has been observed for some time that some of our site visitors, usually of the less desirable type, have been ‘presenting’ private IP addresses, as reported by our site protection software.
An IP address is considered private if it falls within one of the address ranges reserved for private use by the Internet standards groups. The private IP address ranges are:
- 10.0.0.0 through 10.255.255.255
- 169.254.0.0 through 169.254.255.255 (APIPA only)
- 172.16.0.0 through 172.31.255.255
- 192.168.0.0 through 192.168.255.255
Private IP addresses are typically used on local networks, including home, school and business LANs, as well as in places such as airports and hotels.
Devices with private IP addresses cannot (in theory) connect directly to the Internet. Likewise, computers outside the local network cannot connect directly to a device with a private IP. Instead, access to such devices must be brokered by a router or similar device that supports Network Address Translation (NAT). NAT hides the private IP numbers but can selectively relay messages to these devices, affording a layer of security to the local network.
Standards groups created private IP addressing to prevent a shortage of public IP addresses available to Internet service providers and subscribers.
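For reference, checking whether an address falls in one of these ranges is straightforward; a minimal sketch (the helper names are our own, not from any particular package):

```javascript
// The reserved private ranges listed above, as [network, prefix-length] pairs.
const PRIVATE_RANGES = [
  ['10.0.0.0', 8],
  ['169.254.0.0', 16],  // APIPA
  ['172.16.0.0', 12],
  ['192.168.0.0', 16],
];

// Convert a dotted-quad IPv4 address into an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => ((acc << 8) + Number(octet)) >>> 0, 0);
}

// True when the address falls inside any of the reserved private ranges.
function isPrivateIP(ip) {
  const addr = ipToInt(ip);
  return PRIVATE_RANGES.some(([net, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0;   // e.g. /12 -> 0xFFF00000
    return ((addr & mask) >>> 0) === ((ipToInt(net) & mask) >>> 0);
  });
}
```

So, for example, `isPrivateIP('172.31.255.255')` is true while `isPrivateIP('172.32.0.1')` is not, since the 172 range only runs to 172.31.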
Despite the above, which is the standard Internet practice, we had already observed visitors using addresses in the 192.168 range for over a year. However, since the beginning of the month (February 2014) we have also seen a large number of addresses in the 172.16 range. Something has obviously changed, as these should not be possible.
Searching on the web has not revealed any other site reporting the problem. Whilst it is not an issue for us, since we do not use the IP address information for any purpose other than assessing where our visitors originate from, it might well pose a problem for other sites. It is suspected that the only ‘real’ way to stop the practice would be to block the IP ranges, so that a visitor presenting an address within those ranges from outside the local network is effectively ‘blocked’ from accessing any information upon a site, although according to the criteria above this should not be required.