Wednesday, March 14, 2012
It is hard keeping up with new stuff. Especially so when you are a veteran whose head is already packed with old stuff that's not willing to go away. However, new stuff, especially in dynamic fields like IT and software, has its own way of sneaking up on you. New stuff that you are forced to deal with on a moment's notice can make you hazy and discombobulated (just learned this word the other day...)
My way of coping with this problem is to constantly keep in touch with new tech, new approaches, and new problems, so I do not face an insurmountable wall to climb whenever the reality of change hits me.
One of the best tools I have is my membership in Experts Exchange, and specifically the fact that I try to be on the giving side, an expert giving free advice. Each and every answer I provide hones my skills and teaches me something new. I learn even from questions that I have nothing to help with.
Give it a try.
Saturday, June 4, 2011
Things your IT can do in the cloud - #1: anonymous proxy
Why anonymize your business?
In many people's minds, browsing via an anonymizing proxy is associated either with hard-core anarchists or with opponents of oppressive regimes.
However, there are several very good reasons to use anonymizing proxies in your business.
You may want to hide from your competitors the fact that you visit their web site and look for specific information to acquire competitive intelligence. You also want to hide these visits from third party web sites, which may cooperate with your competition more closely than they do with you.
When researching a new product, a firm searches various databases, vendor sites, and academic sources for information on the relevant materials, processes, possible suppliers, and market data. When using subscription-based information sources, the firm's confidentiality is protected contractually. That is not so in public sources. In many industries, knowing in advance that a certain firm is looking for information about certain processes and materials is worth a lot of money.
Prior to any merger or acquisition, a team of analysts will research the firm to be merged with or acquired. Just as in regular R&D, exposing your intentions to the researched subject or to a third party can be very costly.
Of course, there is no need to anonymize all of the traffic going out of your firm's network. The need can usually be pinpointed to a few individuals who really require this protection, and even they do not need it all of the time.
Using EC2 to anonymize your users
The idea of using an Amazon EC2 server as an anonymous proxy is not a new one. There are already a number of articles on the web explaining the technicalities of setting up proxy servers and tunnels in EC2. However, the existing articles are aimed at highly technical individuals, and the setup is usually much more complex than what I am about to show.
Technically speaking, there are several ways to achieve our goal. You can tunnel the traffic via SSH, set up a SOCKS server, or use anonymizing networks like Tor and I2P. I chose to showcase a simple HTTP proxy using standard Apache, because it is so easy to set up and yet so effective. Using AWS CloudFormation, you can have a 1-click proxy up and running in no time!
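For comparison, the SSH tunnel alternative needs nothing more than a single client-side command. This is just a sketch, assuming an Amazon Linux instance (whose default login user is ec2-user), your key pair, and a made-up public DNS name:
# open a local SOCKS proxy on port 1080, tunneled through the EC2 instance
ssh -i mykeypair.pem -N -D 1080 ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com
Point the browser's SOCKS settings at localhost:1080 and you are done. The Apache proxy described below, however, needs nothing on the client side beyond the browser's proxy settings.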
I assume that readers of this blog post already have an AWS EC2 account, along with their credentials, certificates, and keys. You should also have some familiarity with the AWS self-service portal. I will focus on what you specifically need to do to quickly and easily deploy an anonymizing Apache proxy server. Scroll down to see the CloudFormation template used to automate the deployment.
Recipe materials:
- 1 EC2 security group
- 1 Elastic IP
- 1 EC2 Micro Linux server
- 1 Apache httpd
- A dash of configuration changes
Create an EC2 security group
We will use the EC2 security group to limit access to the proxy server. After all, we do not want free riders to use our proxy, especially since we will be paying for all of the traffic.
First, you have to know the outbound address of your network. In this example, I used a bogus address of 79.181.46.194. Create a new security group, and add a rule that allows inbound access only from that address.
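If you prefer a command-line sketch over the console, the equivalent with today's AWS CLI would look roughly like this. The group name is made up, and the AWS CLI itself postdates the original setup, so treat it as illustration only:
# create the group and allow ports 80 and 443 only from the corporate outbound address
aws ec2 create-security-group --group-name proxy-sg --description "Anonymizing proxy access"
aws ec2 authorize-security-group-ingress --group-name proxy-sg --protocol tcp --port 80 --cidr 79.181.46.194/32
aws ec2 authorize-security-group-ingress --group-name proxy-sg --protocol tcp --port 443 --cidr 79.181.46.194/32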
Start a new EC2 instance
For a small number of users, a Linux micro instance is more than enough. Start a new instance, select the basic 32-bit Amazon Linux AMI, and then choose the micro instance size.
The Amazon Linux servers have almost nothing preinstalled on them, but they do have the cloud-init service. cloud-init, originally developed by Canonical for Ubuntu, lets you pass bootstrap configuration data, parameters, and commands to the server. Our instance will read the user data passed to it during initialization and use it as the input for cloud-init.
Here are the actual contents to be used for the user data.
The "packages" section installs the latest Apache httpd package from Amazon's yum repository.
The "runcmd" section appends the minimal set of Apache configuration directives required for the proxy, and restarts Apache. Port 443 is added to support browser configurations that use it for SSL proxying.
Before you copy and paste, take care to change the IP address to that of your network. Although the EC2 security group should take care of unwanted network access, I think it is good practice to include some access control here as well.
#cloud-config
packages:
- httpd
runcmd:
- echo listen 443 >> /etc/httpd/conf/httpd.conf
- echo ProxyRequests On >> /etc/httpd/conf/httpd.conf
- echo ProxyVia Block >> /etc/httpd/conf/httpd.conf
- echo \<proxy \*\> >> /etc/httpd/conf/httpd.conf
- echo Order deny,allow >> /etc/httpd/conf/httpd.conf
- echo Deny from all >> /etc/httpd/conf/httpd.conf
- echo Allow from 79.181.46.194 >> /etc/httpd/conf/httpd.conf
- echo \<\/Proxy\> >> /etc/httpd/conf/httpd.conf
- service httpd restart
The last thing to do before you launch the instance is to assign the previously created EC2 security group to the new micro instance.
Before we turn to configuring the end users, we may want to associate an Elastic IP with our new instance. We want a predictable environment, where the configuration changes to end users and our IT infrastructure are minimal. A new EC2 instance gets an unpredictable IP address; with an Elastic IP we keep a known address and can even point a DNS record at our proxy. Note that an Elastic IP can only be associated with an instance after the instance is up and running.
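If you like doing this from the command line, the Elastic IP steps boil down to something like the following sketch. The instance id is a made-up placeholder, and the AWS CLI shown here postdates the original post, which used the console:
aws ec2 allocate-address                  # returns a new Elastic IP address
aws ec2 associate-address --public-ip 50.51.52.53 --instance-id i-0123456789abcdef0
# in a VPC account, use the AllocationId returned by allocate-address instead:
# aws ec2 associate-address --allocation-id eipalloc-12345678 --instance-id i-0123456789abcdef0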
That's it – start the server, wait 3 minutes, and you have a private yet anonymous proxy.
How much does it cost?
If you are a new customer, you are entitled to 12 months of free-tier discount, which reduces the costs significantly. Assuming that your business needs the proxy 50% of the time and uses 50GB of bandwidth a month, the setup used in this post costs $26 a month, or $15 after the discount.
If you plan to keep the proxy online 100% of the time, it will cost you a total of $30 a month, or $12 after discount.
The reason for this seemingly strange pricing is the cost of the Elastic IP: using it is free, but you pay for the hours it is reserved while not attached to a running instance.
AWS CloudFormation Template
CloudFormation allows you to bundle all of the resources needed to launch an application into a single, automated job. In our case, we have three resources: a security group, an Elastic IP, and a server instance. The following template starts everything in one step.
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Instant Anonymizing proxy",

  "Parameters" : {
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instance",
      "Type" : "String",
      "Default" : "MySSHKeypair"
    },
    "MyNetwork" : {
      "Description" : "Outbound IP Address of your corporate network",
      "Type" : "String",
      "Default" : "100.101.102.103"
    },
    "MyEIP" : {
      "Description" : "Existing Elastic IP",
      "Type" : "String",
      "Default" : "50.51.52.53"
    },
    "InstanceType" : {
      "Description" : "Type of EC2 instance to launch",
      "Type" : "String",
      "Default" : "t1.micro"
    }
  },

  "Mappings" : {
    "RegionMap" : {
      "us-east-1"      : { "AMI" : "ami-8c1fece5" },
      "us-west-1"      : { "AMI" : "ami-3bc9997e" },
      "eu-west-1"      : { "AMI" : "ami-47cefa33" },
      "ap-southeast-1" : { "AMI" : "ami-6af08e38" },
      "ap-northeast-1" : { "AMI" : "ami-300ca731" }
    }
  },

  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "KeyName" : { "Ref" : "KeyName" },
        "InstanceType" : { "Ref" : "InstanceType" },
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
        "ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ]},
        "Tags" : [
          {
            "Key" : "Name",
            "Value" : "MyProxy"
          }
        ],
        "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
          "#cloud-config","\n",
          "\n",
          "packages:","\n",
          "- httpd","\n",
          "\n",
          "runcmd:","\n",
          "- echo listen 443 >> /etc/httpd/conf/httpd.conf","\n",
          "- echo ProxyRequests On >> /etc/httpd/conf/httpd.conf","\n",
          "- echo ProxyVia Block >> /etc/httpd/conf/httpd.conf","\n",
          "- echo \"<proxy *>\" >> /etc/httpd/conf/httpd.conf","\n",
          "- echo Order deny,allow >> /etc/httpd/conf/httpd.conf","\n",
          "- echo Deny from all >> /etc/httpd/conf/httpd.conf","\n",
          "- echo Allow from ", { "Ref" : "MyNetwork" }, " >> /etc/httpd/conf/httpd.conf","\n",
          "- echo \"</proxy>\" >> /etc/httpd/conf/httpd.conf","\n",
          "- service httpd restart","\n" ]]}}
      }
    },
    "InstanceSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "All ports access from my corporate network",
        "SecurityGroupIngress" : [ {
          "IpProtocol" : "tcp",
          "FromPort" : "0",
          "ToPort" : "65535",
          "CidrIp" : { "Fn::Join" : [ "/", [ { "Ref" : "MyNetwork" }, "32" ] ] }
        } ]
      }
    },
    "IPAssoc" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Properties" : {
        "InstanceId" : { "Ref" : "Ec2Instance" },
        "EIP" : { "Ref" : "MyEIP" }
      }
    }
  },

  "Outputs" : {
    "ProxyIP" : {
      "Description" : "The IP address for the newly created Proxy server",
      "Value" : { "Ref" : "MyEIP" }
    }
  }
}
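For completeness, here is one possible way to launch the stack from the command line. The AWS CLI shown here postdates the original post (which relied on the CloudFormation console), and the template file name and parameter values are placeholders:
aws cloudformation create-stack --stack-name MyProxy \
  --template-body file://proxy.json \
  --parameters ParameterKey=KeyName,ParameterValue=MySSHKeypair \
               ParameterKey=MyNetwork,ParameterValue=79.181.46.194 \
               ParameterKey=MyEIP,ParameterValue=50.51.52.53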
Advanced proxy setup
You may consider a more customized Apache configuration.
Optimizing your Apache installation for performance and security is a good idea. It means stripping out all unnecessary Apache modules, keeping only the bare minimum required for the proxy functionality, and modifying some Apache directives for increased security.
Adding better access control with some kind of user authentication is also a good idea. Basic authentication is very easy to set up and adds another layer of security to your proxy.
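Here is a minimal sketch of what that could look like, assuming the Apache 2.2 that ships with the Amazon Linux AMI. The user name, password, and file names are made up for illustration:
yum install -y httpd-tools                                      # provides the htpasswd utility
htpasswd -bc /etc/httpd/conf/proxy.htpasswd alice s3cretPassw0rd
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
<Proxy *>
    AuthType Basic
    AuthName "Anonymizing proxy"
    AuthUserFile /etc/httpd/conf/proxy.htpasswd
    Require valid-user
</Proxy>
EOF
service httpd restart
Because the first <Proxy *> block already restricts access by IP, Apache 2.2's default Satisfy All behavior means clients must pass both the IP check and the password check.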
You should also decide what to do with the server logs. If you keep them, you need to define a log retention policy and configure Linux/Apache accordingly. But maybe the best idea is to give up on logs altogether; after all, we wanted an anonymous proxy, didn't we?
Dealing with road warriors is another matter. You will have to either require that they reach the proxy through the enterprise network, or change the entire security scheme by using a VPN or another encrypted channel between the proxy and the end users.
The downside to all of the advanced setup ideas is that you will most likely have to configure a private AMI and add more software to support your needs. A future post will deal with some advanced scenarios.
How to configure your users' browsers
This is beyond the scope of this article, but I added a few links nonetheless.
Here are the instructions for Firefox: http://support.mozilla.com/en-US/kb/Options%20window%20-%20Advanced%20panel?s=proxy&as=s#w_network-tab
Here for Chrome: http://www.google.com/support/chrome/bin/answer.py?answer=96815
And here for Explorer: http://support.microsoft.com/kb/135982
In a corporate environment, you should also take a look at Proxy Auto-Config (PAC) scripts: http://en.wikipedia.org/wiki/Proxy_auto-config
You can set up a code-based PAC server that supplies different PAC files to different users based on various criteria.
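For illustration, a minimal PAC file (PAC files are JavaScript by specification) could look like the sketch below. The domain name is made up, and the proxy address is the Elastic IP placeholder from the template:
function FindProxyForURL(url, host) {
  // send only the sensitive destinations through the anonymizing proxy
  if (dnsDomainIs(host, ".competitor-example.com"))
    return "PROXY 50.51.52.53:80";
  // everything else goes out directly
  return "DIRECT";
}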
Caveats
Do remember that a proxy like the one we just discussed only hides the origin IP address. It does not hide any cookies the user accumulated in previous browser sessions, and it certainly does not hide you if you log in to a subscription-based service with your corporate email. I recommend using a browser dedicated only to sensitive operations. At the very least, use a private session; all modern browsers support this feature. For example, if you use Chrome, pressing Ctrl+Shift+N opens a new incognito session.
One more thing: it turns out that Flash does not respect the browser's proxy settings.
Sunday, November 21, 2010
Azure management
In the last days of October, Microsoft announced that it is going to release remote desktop access to Azure instances.
That is very nice. However, even though you will be an admin on the instance and will be able to install and do anything you like, all changes will be ephemeral.
There is no way to save the modified image, and the next time it is restarted all changes will be gone.
This is worse than Google App Engine, which gives your applications the illusion of a single hypercomputing instance. You don't have a VM for every role, at least not knowingly. By decoupling the application runtime from running VM instances, Google is ready to adopt whatever computing platform innovations come along. Azure, on the other hand, still leaves you responsible for managing scalability yourself.
This is also worse than EC2. In EC2 you have the option to bundle a running instance into a private image, so any modifications you make can be saved for future sessions.
The only good thing about this feature is the ability to debug your application, and I don't underestimate this new capability.
Monday, October 18, 2010
Red vs Blue
A couple of days ago, I had the privilege to listen to a webcast by Hasan Rizvi, Senior Vice President for Oracle Fusion Middleware and Java Products.
Hasan spoke about the new Oracle Exalogic product, and about how this new product is going to revolutionize the datacenter and provide cloud computing capabilities.
Unfortunately, Mr. Rizvi's talk was not about cloud computing, public or private. Nor was it about datacenters. All I could see was a sniper rifle aimed at IBM's mainframe business.
Oracle and IBM have a long-lasting relationship, framed by both intense rivalry and business cooperation. Although they competed in the database market, an Oracle database running on AIX was a favorite enterprise configuration. This stable state of affairs was not heavily disturbed when Oracle started foraying into the enterprise integration business. Its SOA family of products, running on the Oracle Application Server, was mostly used by heavily committed Oracle shops, and was no match for the stability, configurability, and performance offered by IBM's WebSphere brand and the other leaders of the pack.
All of this changed from 2007 onwards, with the acquisitions of BEA and Sun.
BEA's dowry included the WebLogic server, vastly superior to Oracle's OAS, and the AquaLogic suite, which was a better line than Oracle's own. Both lines offer real competition to the WebSphere brand. BEA's legacy also included JRockit and Tuxedo.
Tuxedo deserves special consideration: it is used to offload applications from IBM's mainframe onto distributed systems, and combined with Oracle's aggressive marketing organization it may now pose a real threat to IBM's mainframe business.
However, not all mainframe shops run CICS. Many modern mainframe workloads are based on Java, specifically on the WebSphere Application Server for z/OS. Until now, there was no real substitute for the MIPS, integration, and security benefits of WebSphere-based Java on z/OS.
And then came Sun... With the acquisition of Sun, Oracle now has full control over Java, plus its own line of servers and storage. This does not spell good news for IBM, which is heavily invested in Java-based technologies and is a major storage vendor.
It all culminated in the aforementioned Exalogic webcast. All of the comparisons, benchmarks, and jargon used were aimed either at current IBM mainframe users or at mainframe wannabes who want the power of a mainframe on distributed systems. Even the price/performance table on slide 21 compares Exalogic with IBM's Power 795 server. Who else is likely to dole out $1,075,000 for a piece of hardware?
And what did IBM do? It ditched its support for independent Java development in the Apache Harmony project, and went into the Java bed with Oracle.
See the recorded webcast and the slides here:
http://w.on24.com/r.htm?e=244865&s=1&k=7C946C00C82CA0F93EA4E95A5A6BA196
To download only the slides use this link
http://event.on24.com/event/24/48/65/rt/1/documents/slidepdf/webcastexalogic1012b.pdf
IBM gives up on Apache backed Java and joins Oracle
http://www.sutor.com/c/2010/10/ibm-joins-the-openjdk-community/
Oracle sues Google over Java standards
http://news.cnet.com/8301-30684_3-20013546-265.html
Friday, October 1, 2010
Cloud Security Architecture Matters
Not all cloud services are created equal. Clouds are architected by flesh-and-blood men and women, and because there are no cloud standards yet, the architectural choices are invariably different.
A couple of months ago I was looking for differences between various cloud vendors' API implementations, to show at a local OWASP meeting.
And differences I found. The most important difference is in the way users present their credentials to the service, and how the service ensures that it receives a valid request.
To make things comparable, let's look at the way 3 leading cloud vendors (Amazon Web Services, GoGrid and RackSpace) authenticate and authorize usage of their cloud resources.
We care about the way the service ensures that it receives a valid request, because we want to minimize the risk of account hijacking, and to minimize the risk of action replay.
A hijacked account means that a third party does unintended actions using resources that are tagged as belonging to the account owner. The third party can use the account for unwanted activity, such as distribution of illegal materials or promotion of spam, and the bill is handed out to the account owner.
Action replay means that someone may repeatedly send identical commands to the cloud infrastructure on behalf of the account owner. At best it is just a nuisance that may cost a little.
At worst, it may enable a third party to replace valid content with invalid, or even malicious content.
I am going to compare the basic action of listing the contents of the cloud objects container.
To make a long story short, here is a brief description of the results.
- All APIs make use of the HTTP protocol, with some custom extended headers.
- All APIs require a public access identifier and a secret key shared between the account owner and the vendor.
- AWS does not need the secret key in plaintext. It requires a unique signature for each request, making it extremely difficult to do any harm to the account, even if the request is somehow intercepted on the way (see the sketch after this list).
- GoGrid does not need the secret key in plaintext. It requires a signature as part of an authentication request, and returns a security token that is valid for a period of 10 minutes, and for all actions within 10 minutes.
- RackSpace requires the secret key in plaintext as part of an authentication request, and returns a security token that is valid for a period of 24 hours, and for all actions within 24 hours.
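To make the difference concrete, here is a rough sketch of per-request signing in the style AWS used at the time, based on the legacy S3 REST authentication scheme. The bucket name and credentials are placeholders, and in practice a client library does all of this for you:
# list the contents of a bucket without ever sending the secret key itself
ACCESS_KEY="AKIAEXAMPLE"
SECRET_KEY="wJalrXUtnFEMIEXAMPLEKEY"
DATE=$(date -Ru)
STRING_TO_SIGN=$(printf "GET\n\n\n%s\n/mybucket/" "$DATE")
SIGNATURE=$(printf "%s" "$STRING_TO_SIGN" | openssl sha1 -hmac "$SECRET_KEY" -binary | base64)
curl -H "Date: $DATE" -H "Authorization: AWS $ACCESS_KEY:$SIGNATURE" https://mybucket.s3.amazonaws.com/
An intercepted request exposes only a signature that is tied to that request and that timestamp, not the secret key.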
Sunday, September 13, 2009
How to get AWStats to show Intranet location stats
I was asked by Teva's HR department to do something about the HR intranet "Portal". The requirement is to know how the HR intranet is used, based on the three W's: What, When, and Where. Teva has multiple offices in Israel, and HR went through a major reorganization and restructuring of its services and methodology over the past few years.
It is good they came to me first, as some of my colleagues would have made it into a multi-million dollars data warehouse analysis project :)
Having good experience with AWStats, I decided to use this tool to analyze the HR intranet logs.
AWStats provides good enough stats about the What and the When, but I hit a blank wall regarding the Where. There are several plugins that enable GeoIP analysis, but according to the documentation, all of them are useless for intranet-only log files.
The plugins from MaxMind use a proprietary format, and do not include my 10.* network anyway.
This left me with the IPFree database and plugin as the only viable option to add branch awareness into the intranet stats.
Geo-IPfree can be found on CPAN. First thing is to download it.
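If your Perl has CPAN configured, one possible way to fetch it is:
# installs Geo::IPfree and its dependencies from CPAN
perl -MCPAN -e 'install Geo::IPfree'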
We need a couple of tools that may or may not be included in the IPfree package. txt2ipct.pl and ipct2txt.pl are required for us to make the modifications to the IPfree database.
First, extract the IP database from ipscountry.dat into an editable text file.
perl ipct2txt.pl ./ipscountry.dat ./ips-ascii.txt
Let's suppose that you have 3 branches: London, Rome and Tel-Aviv. Your WAN network segments are respectively 10.1.*, 10.2.*, 10.3.*, and must be mapped into the IPfree database.
Open the ips-ascii.txt file with your favorite text editor and find your LAN/WAN IP range.
You have a line that looks like this
ZZ: 10.0.0.0 10.255.255.255
We will map your locations into codes Z1, Z2 and Z3, because there are no such ISO country codes.
So, replace your ZZ line with these five
ZZ: 10.0.0.0 10.0.255.255
Z1: 10.1.0.0 10.1.255.255
Z2: 10.2.0.0 10.2.255.255
Z3: 10.3.0.0 10.3.255.255
ZZ: 10.4.0.0 10.255.255.255
Just for safety, rename your current ipscountry.dat file, and execute
perl txt2ipct.pl ./ips-ascii.txt ./ipscountry.dat
Now, create a new Geo folder under the plugins folder in the AWStats installation, and copy the IPfree.pm and ipscountry.dat files into the new folder.
We have to modify the lib/domains.pm file to recognize the new Z1, Z2 and Z3 domains. Just add them to the end of the list, and keep the new domain names in lower case. The last line of the domains.pm file will now look like this
'zm','Zambia','zr','Zaire','zw','Zimbabwe', 'z1', 'London', 'z2', 'Rome', 'z3', 'Tel Aviv'
The last thing to do is to turn on the geoipfree plugin in the AWStats configuration file.
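To the best of my recollection of the AWStats plugin syntax, that amounts to a single line in your awstats.<yoursite>.conf file:
LoadPlugin="geoipfree"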
Have fun!!
P.S.
Maybe I will initiate the multi-million dollar web data warehouse project after all ...
Wednesday, September 2, 2009
Restoring my mobile phone backup to a new phone
My Sony Ericsson K610 mobile phone dropped dead.
The service guy removed my SIM and memory card from the fresh corpse, and stared me in the eye: "Do you have anything stored on the phone, sir? There are no phone numbers stored on the SIM"
I remembered storing everything on the phone instead of on the SIM.
I also remembered using Sony's PC Suite to backup the phone contents a couple of days ago.
"Don't worry, got backup" - I told the service guy.
Problem
The Sony Ericsson PC Suite recognized the new phone easily. I selected Tools/Backup&Recovery and started to shiver.
No backup was listed, and my sync to Outlook was 4 months old.
Research
First, I determined that I must find the old backup. I created a new backup for my new phone, called it "backup2", and learned that the phone backups are stored with extension .dbk inside "My Documents", in folder "Sony Ericsson\Sony Ericsson PC Suite\Phone backup".
As if by magic, my old backup file also turned up in this folder.
I realized that the file contents must be tagged with the phone information.
Opening the backup file with a text editor didn't help. It looked almost completely binary.
However, there was a slight resemblance to something I saw before...
I created a copy of the dbk file, and renamed it with a zip suffix. That did the trick.
Solution
The key is in the phoneID.txt file.
Version=3.0
DeviceManufacturer=Sony Ericsson
DeviceModel=K610
HeartBeat=HB1-06
IMEI=3546xxxxxxxxxx
PhoneName=My K610
I replaced the file with the one found in the dummy backup of the new phone
Version=3.0
DeviceManufacturer=Sony Ericsson
DeviceModel=W595
HeartBeat=HB1-07
IMEI=3529yyyyyyyyyyyy
PhoneName=My W595
Changed the zip file extension back to dbk. Now Sony Ericsson PC Suite showed the old backups and I was able to restore all of my important information to the new phone.
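For the record, the whole trick can be written down as a handful of commands. I actually did the renaming and copying by hand in Windows, and the file names below are made up:
cp "K610 backup.dbk" old_backup.zip          # a .dbk backup is really just a zip archive
cp "W595 backup.dbk" new_backup.zip
unzip -o new_backup.zip phoneID.txt          # take phoneID.txt from the new phone's dummy backup
zip old_backup.zip phoneID.txt               # overwrite the descriptor inside the old backup
mv old_backup.zip "K610 backup.dbk"          # rename back to .dbk so PC Suite lists it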