Technology

This Person Does Not Exist

I’ve been very interested in current advances in Artificial Intelligence (AI) for several years now, so it is not often that a new website takes me by surprise. This week, one site managed to do just that: each time you hit refresh on https://thispersondoesnotexist.com/ the site creates a shockingly realistic — but totally fake — picture of a person’s face using a generative adversarial network (GAN) algorithm.

You can even embed those images in your own web pages:
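
Grabbing one of these generated images programmatically is just as easy. Here is a minimal Python sketch using the requests library; it assumes the site serves a freshly generated JPEG at its root URL and accepts a browser-like User-Agent, both of which may change over time:

    import requests

    # Grab one freshly generated face. Assumes the site returns a JPEG at
    # its root URL; endpoints change, so verify before relying on this.
    url = "https://thispersondoesnotexist.com/"
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()

    with open("fake_face.jpg", "wb") as f:
        f.write(resp.content)
    print("Saved", len(resp.content), "bytes to fake_face.jpg")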

The site was created by Uber engineer Phillip Wang as a clever demonstration. The underlying code, called StyleGAN, was created by Nvidia (NVDA) and featured in a research paper.

This type of GAN neural network has huge potential for video gaming technology - hence Nvidia’s interest and research - but could also be used to create more realistic Deepfakes (AI-generated images or videos that can be used to push fake news narratives or other hoaxes).

Reboot

This personal blog has been mostly dormant for a few years, as I have mainly published content on the Altova Blog, but 2019 seems to be as good a year as any to hit Ctrl-Alt-Del on my XML Aficionado blog and once again comment on industry trends, new developments, new technologies, and changes that impact our society in general. As has been the case in the past, there might be a few posts about the Red Sox or Patriots, too…


I have also rebooted the layout and design of the blog, and it has been moved from Blogger to Squarespace, upgraded to SSL, and now features a mobile-friendly responsive design. Hope you like it…

All your base are belong to us

Seeing young people today take for granted technology that was quite literally the stuff of SciFi stories during my childhood makes me wonder how we're going to get to the next level if fewer and fewer people get into engineering and science careers now than in the past 50+ years.

Consider for a moment how much computers and the processing power they possess have advanced over the past 24 years: when I started this business in 1992, we were playing video games like Zero Wing ("CATS: All your base are belong to us", released in 1991) and Myst in 1993.

Now we're immersing ourselves in a virtual world like Destiny in 2014 and The Order: 1886 in 2015, and are on the brink of even more immersive experiences with VR goggles such as Oculus Rift and Microsoft HoloLens on the horizon. Yet if you consider the advances from Zero Wing to Destiny you're still only looking at about ⅔ of the progress that I've personally witnessed since I became interested in computers at age 12...


Back then we had a TRS-80 in my middle school and a friend's dad owned a Commodore PET. Later, during my high school career, we had Commodore 8032s to work with, and at my dad's laboratory at the university I had a chance to work with an Apple II (actually, to be more precise, it was a French Apple II clone). It wasn't until my Junior year that I was able to afford my first very own computer, an Apple IIe, and later one of the first IBM PC-XTs and then one of the first Macs during college.

I started programming early on and wrote software for a variety of local small businesses, which allowed me to be an early adopter and buy some pretty neat computers at that time. All of these machines had - by today's standards - extremely slow CPUs and laughably small amounts of RAM (and none of them, except the PC-XT, even had a hard disk!).

In fact, your typical smartphone today has more computing power, memory, disk space, etc. than all of NASA had in their "supercomputers" when they placed a man on the moon.

So why is it, then, that we see so few young people interested in anything more than just playing games on their computers, consoles, and phones? Why do we need efforts like code.org to try to encourage more students to explore programming and computer science? Why is the age-old question of "how do I program this darn thing" not burning in the minds of more young people?

All I can imagine is that there is, perhaps, a significant difference between then and now due to increasing complexity. Back in the early days, it was maybe a bit easier to be fascinated by computers and to be sucked into wanting to program them, because it was still possible to completely comprehend how a computer worked. Within just a few weeks you could teach yourself a programming language and create your first program. And you could create something cool in just a few months. By contrast, nowadays, to create something "cool" you need almost a movie studio budget and a team of programmers working for several years.

However, the barrier to entry was much higher back then in economic terms: you had to use a computer in a lab, at the school, or in college. Very few people could afford their own computer. By comparison, with a budget of < $80 you can build your own Raspberry Pi today and hook it up to an old monitor and off you go. You get all the programming tools in the world and a platform that is open and invites you to experiment not only with the software, but also with the hardware!

So why are young people today more inclined to play video games (be it on their smartphones, on PCs, or on consoles) than to want to program computers? And is code.org the right approach to get more people interested in computer science?

Let's discuss...

Flight tracking via ADS-B on a Raspberry Pi

Here's a fun little project you can build that sits at the crossroads of computing and radio communications: a flight tracker that uses SDR (software-defined radio) on a Raspberry Pi with a DVB-T USB stick to receive ADS-B transmissions directly from aircraft flying overhead. Once you've built the system, you can point your web browser at a port on the Raspberry Pi to take a look at all the airplanes near your location - not a lot going on above the White Mountains this Sunday afternoon:


I've recently built three of these and linked them all to the FlightAware tracking website, where you can see the flights currently tracked by all three receivers, as well as tracking statistics regarding the number of flights seen per day, etc. Here is the complete system with the three cables being power, Ethernet, and the antenna connection:


The basis of this tracking is Automatic Dependent Surveillance – Broadcast (ADS-B): each airplane transmits on a frequency of 1090 MHz a message that contains GPS position, speed, altitude, heading, ascent/descent rate, and other navigational data. This information is normally used by ATC (Air Traffic Control) and is also received by other airplanes to provide situational awareness and allow self-separation, i.e. collision avoidance. Since it is broadcast in a standardized format, it can be received and decoded by anybody, including ground stations.

Which brings us to the cheapest and most interesting way to receive these signals: SDR, or software-defined radio - a technology where components that have typically been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented in software on a computer. All you need is an antenna and a USB device that can support SDR applications, such as a cheap DVB-T USB stick.

For the computer system we don't need much processing power, so the Raspberry Pi Model B+ is the perfect choice for a low-priced stand-alone system that can easily handle the decoding of the ADS-B signal using the open-source dump1090 software and stream the data to a tracking site, such as FlightAware.
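
Incidentally, once dump1090 is running, its data is also easy to get at from your own scripts. Here is a minimal Python sketch that polls the receiver's JSON feed; the hostname, port, path, and field names are assumptions that vary between dump1090 flavors (the original serves /data.json on port 8080, later forks use /data/aircraft.json), so check your installation:

    import json
    import urllib.request

    # Poll dump1090's embedded web server for the aircraft currently being
    # tracked. Hostname and path are assumptions - adapt to your setup.
    URL = "http://raspberrypi.local:8080/data.json"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        aircraft = json.load(resp)

    # Field names also vary slightly between dump1090 flavors.
    for plane in aircraft:
        print(plane.get("hex"), plane.get("flight"), plane.get("altitude"),
              plane.get("lat"), plane.get("lon"))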

FlightAware also has some good instructions on how to build the system as well as a shopping list of all the components you will need: Build a PiAware ADS-B Receiver. Overall, the complete system, including case, power supply, Ethernet cable, etc. will cost you about $105.

However, the tiny antenna that comes standard with the DVB-T stick is only good for receiving signals from a very limited range. So one of the components you might want to upgrade sooner or later is the antenna: get one that is actually appropriate for 1090 MHz, such as this vertical ADS-B outdoor base station antenna, or this ADS-B blade indoor antenna. I opted for the indoor antenna, since I didn't want to run extra antenna cables to the roof. And the indoor antenna is already so much better than the original tiny DVB-T whip.


As you can see in this diagram, upgrading the antenna on November 12th resulted in the system being able to receive about 40,000 - 50,000 positions per day instead of 13,000 - 14,000 positions with the tiny original antenna - the correct antenna really makes a huge difference in the capability and range of the system!

Overall this is a fun little project to do on a rainy weekend. You can either build it all by yourself, or use it as an opportunity to teach the kids how to build a computer. Some Linux and networking skills are required, but nothing too complicated. And there are good instructions available for each step of the process.

Extracting useful data from HTML pages with XQuery

When building in-house solutions or mobile enterprise applications, you are often faced with having to deal with legacy systems and data. In some ancient systems, the data might only be available as CSV files; in other cases it might be arcane fixed-length text reporting formats. But if the legacy system is less than 20 years old, chances are pretty good that someone built an HTML front-end, so the data is available through a browser interface that renders it in some poorly formatted HTML code that loosely follows the standard. Very likely you will find the data intermixed with formatting and other information, so extracting the useful data is usually not as easy as it sounds.

In addition, when you are building mobile solutions, you may sometimes need some government data that is not yet available in XML or another structured format, so you again are faced with having to extract that information from HTML pages.

Common approaches to extracting data from HTML pages, such as screen-scraping and tagging, are cumbersome to implement and very susceptible to changes in the underlying HTML.

In this video demo I want to show you a better way of extracting useful and reusable data from HTML pages. In less than 15 minutes we will build a mobile solution that - as an example - takes Consumer Price Index data from the US Bureau of Labor Statistics, parses and normalizes the HTML page, and then uses an XQuery expression to build nicely structured XML data from the HTML table, which can then be reused to build a CPI chart. I will walk you through the creation of the XQuery expression step by step so that you can easily apply this method to similar problems of HTML data extraction:



As you can see in the above video, it was fairly easy to create nicely structured XML data from a table in the HTML page and to create a first simple chart that plots the CPI data over time.
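
If you would like to experiment with the same idea outside of MobileTogether, here is a rough Python analogue of what the XQuery expression in the video does: normalize the sloppy HTML into a proper tree, locate the data table, and emit structured XML. This is only a sketch; the URL, table position, and cell layout are assumptions you would need to adapt to the actual BLS page:

    from lxml import etree, html
    import urllib.request

    # Hypothetical URL - adapt to the actual BLS page you are scraping.
    URL = "https://www.bls.gov/cpi/"

    req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
    raw = urllib.request.urlopen(req, timeout=10).read()
    doc = html.fromstring(raw)  # lxml tolerates sloppy HTML and normalizes it

    # Turn the rows of the first table into simple <entry> elements.
    root = etree.Element("cpi-data")
    for row in doc.xpath("(//table)[1]//tr[td]"):
        cells = [c.text_content().strip() for c in row.xpath("td")]
        if len(cells) >= 2:
            entry = etree.SubElement(root, "entry")
            etree.SubElement(entry, "period").text = cells[0]
            etree.SubElement(entry, "value").text = cells[1]

    print(etree.tostring(root, pretty_print=True).decode())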

But the true power of this approach is that you have much more flexible charting capabilities in MobileTogether and the XML data is now reusable, so you can calculate annual inflation rates directly from the underlying CPI data and plot it as well.

In this next video demo I want to show you just how to do that in less than 10 minutes. We will add a year-range selector to our chart where we can define which years to plot, and we will add an overlay chart that derives the annual inflation rate from the underlying CPI data using XPath calculations and then plots that data:



Using this technique, you can not only extract data from individual HTML pages, but also easily build a modern mobile front-end experience for the many legacy systems that only offer an HTML-based browser interface at present. This will make your workforce a lot more productive and efficient, as they can now use a friendly mobile app experience to access your system rather than having to wrestle with HTML pages and forms in a browser on their tiny smartphone screens.

moto 360 Review

A while ago I wrote about my somewhat disappointing experience with the original Galaxy Gear, Fitbit, and Google Glass in an article "The (Broken) Promise of Wearables". It seems that I may have to revise my opinion a bit based on the new moto 360 smart watch:


First of all, I will admit that I'm a huge watch aficionado and have a collection of several beautiful mechanical timepieces and complications, as well as functional sport watches. So the Galaxy Gear just hurts from a design perspective - both in its original form as well as the Gear 2 and the new Gear S. I have also been less than impressed by the new Apple Watch. Despite all the claims by others that it is beautiful, in my eyes it has the same flaw as the Galaxy Gear: the watch is square. Most display screens are rectangular, so they just built a watch around a square or rectangular screen.

However, there is a reason that the majority of watches have evolved with a circular dial. It is the most comfortable to wear, because it doesn't limit the movement of your wrist. And it has a timeless elegance to it.

So I was actually quite excited to receive my moto 360 this week and give it a spin. It has all the cool features we're obviously expecting from a smart watch nowadays: step counter, heart-rate monitor, Bluetooth connection to your cell phone, showing notifications from your phone on your wrist, navigation, voice commands, etc.

Compared to the original Galaxy Gear I tested a year ago, however, the notifications are actually meaningful on the watch now. You get a preview of important emails, text messages, WhatsApp, FB Messenger, and any other app that properly uses the notification API in Android.



If you want to see more of the notification, you just swipe up and get to see the whole message:



Ah, much clearer now. The question was about a college course, not a reptilian issue. The new Android Wear platform also allows you to tap such a notification on your watch, and the corresponding app on your phone gets launched. So if you need to respond to an email, just tap your watch as you take your phone out of your pocket, and you're right where you need to be.

If you need to do a quick Google search, you can now do that from your wrist with voice input and also see the top three results right on your wrist - or tap them to open them on your phone, if you need more details. For example, a search for "XML Editor" produced this:



On the hardware side, the moto 360 gets a lot right that other smart watches got wrong. The charging cradle provides wireless charging and turns the watch into a nice bedside alarm clock. There is only one button on the watch, and it is exactly where the crown used to be on mechanical watches. All other user interaction is done via the touch screen with intuitive swipe operations. The wrist strap is available in leather now, and a metal version is coming later this year. The watch body is stainless steel, and the glass surface is Corning Gorilla Glass.

Last, but not least, you obviously get a choice of 6 different built-in watch faces, including a nice retrograde display, and you can download and install additional watch faces from the app store - some of which are nicely customizable.


What I need now is for some clever app developer to create a really beautiful watch face that includes some of the classic complications: equation of time, moon phase, sunrise and sunset times, sidereal time, etc.

Building a stand-alone mobile solution with MobileTogether

In a recent blog post I introduced our new MobileTogether platform for building mobile in-house solutions. Today I would like to give you a little demonstration of how easy it is to build a mobile solution with MobileTogether Designer.

As an example, we're going to build a simple tip calculator app for your next restaurant visit. Since this particular solution doesn't need any back-end data, we're going to create it as a stand-alone mobile solution so that it can be used even without a server connection once it is deployed.



As you can see, it just took about 8 minutes to build this app. MobileTogether lets you focus on what is really important and handles everything else for you. If you want to try it for yourself, you can download MobileTogether Designer here.

You can also watch more MobileTogether Designer video demos here.

Pixelstick

In December last year I contributed to the Kickstarter campaign for Pixelstick - an interesting new photographic tool for light painting. When our Pixelstick arrived in early September, it was immediately clear that it would need to go to New York with my son when he went back to college.

Calvin is a photography student at NYU's Tisch School and so I knew he would put the device to some creative use... and indeed he just posted a "How to" video on YouTube:



For more information on Pixelstick go to www.thepixelstick.com

For more updates on Calvin's work, follow him on Twitter @EpicFalkon or check out his website discover.calvinphoto.pro

XQuery Update Facility in XMLSpy

A really cool new feature in XMLSpy 2015 is the interactive XQuery Update Facility support, which lets you make changes to XML instance documents in a programmatic way - using XQuery statements - that far exceeds the typical regular-expression-based Find/Replace capabilities. The XQuery Update Facility specification provides a mechanism to insert nodes, delete nodes, and modify nodes within instance documents. In XMLSpy 2015 you can now apply these updates either to the current file, all open files, all files in a project, or to entire directories.
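
If you haven't worked with XQuery Update before, its statements read almost like plain English: insert node, delete node, replace value of node, and so on. To illustrate the kind of tree edits these statements express declaratively, here is a small Python (lxml) sketch that performs equivalent operations imperatively; this is just a conceptual analogue, not the XMLSpy feature itself:

    from lxml import etree

    doc = etree.fromstring("<books><book><title>XML</title></book></books>")
    book = doc.find("book")

    # Equivalent of: insert node <price>9.99</price> into $doc/books/book
    price = etree.SubElement(book, "price")
    price.text = "9.99"

    # Equivalent of: replace value of node $doc//title with "XQuery"
    doc.find(".//title").text = "XQuery"

    # Equivalent of: delete node $doc//price (shown here for symmetry)
    book.remove(price)

    print(etree.tostring(doc, pretty_print=True).decode())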

This video explains the most important XQuery Update Facility commands and demonstrates how easy it is to put the power of XQuery Update Facility to work for you:



The new XQuery Update Facility support is one of the many new features introduced in the new Altova version 2015 product line last week, which includes new versions of XMLSpy, MapForce, all the other MissionKit tools, and all Altova server products.

Introducing MobileTogether - Build mobile in-house solutions faster

It's about time that I start talking about our next major product here: Altova MobileTogether is an exciting new cross-platform mobile environment that lets you build in-house mobile solutions for your workforce much faster and more productively than any other mobile cross-platform method out there. You can use MobileTogether to bring your in-house data — be it in SQL databases, in XML, or available as web services — to your employees on the device of their choice, whether as business intelligence dashboards, elegant enterprise forms, or any other business process, from graphs for sales analytics to the monitoring of business-critical data.


There are, of course, many ways to develop mobile solutions, and for external customer-facing apps the native platform approach or other multi-platform SDKs may make sense. But for in-house solutions the math just doesn't work. You need to be able to build these in a few days rather than weeks or months in order to stay on budget.


MobileTogether makes this rapid development possible by using a unique system architecture that consists of the following three components:


MobileTogether Designer

MobileTogether Designer is the IDE where you build your mobile solutions. It comes with full database support for all major database servers as well as the ability to connect to any XML files, web services, HTML pages, or other data sources directly. If that's not enough, you can connect to FlowForce Server and MapForce Server as an interim data transformation platform to get your data from EDI or other formats into XML easily.


Once you've defined your data sources, you then drag & drop UI controls onto your design surface and connect them with the data model. You can build powerful program logic using visual ActionTrees, and for data manipulation the full power of XPath and XQuery is at your disposal.

This way you can build a powerful mobile solution in just a few hours and test it right inside MobileTogether Designer using the built-in simulator, which supports the iOS, Android, Windows Phone 8, and Windows 8 look & feel as well as many screen sizes and device options.

We have put together a few video demonstrations that show how easy it is to build a mobile solution with MobileTogether Designer.

MobileTogether Mobile App

Once you are satisfied with the way your mobile solution looks, it is time to get it onto your mobile device. The MobileTogether Mobile App is what runs your solutions on your device, and it is available for free from the respective app stores. The Mobile App is available for iOS, Android, Windows Phone 8, and even desktop Windows 8, so you can deploy your solutions to all mobile workers, no matter if they prefer a smartphone, tablet, or laptop!


Normally the MobileTogether Mobile App connects to a MobileTogether Server (see below), but for an initial trial run, you can simply connect the MobileTogether Mobile App directly to your MobileTogether Designer. Instead of starting the simulator in the designer, you select "Trial Run on Client" from the toolbar, and then you can see your mobile solution on your device and test it there, provided your mobile device is on the same local network as the computer where you are running MobileTogether Designer.

MobileTogether Server

Once you're ready to deploy your solution to your entire workforce, it is time to install and configure MobileTogether Server. This server acts as a conduit between your mobile clients and your database servers and other data sources in your IT infrastructure.


If you only want your mobile solutions to be available while the mobile devices are connected to your corporate Wi-Fi network, then it is sufficient to install the server in-house and your employees can immediately run your mobile solutions, once you've deployed them from the Designer to the Server.

If you want your employees to also be able to access your solutions while they are on the road, you will need to designate and open a port in your firewall so that the client devices can reach your server from the public Internet. We recommend installing an SSL certificate for that purpose so that the data connection between the clients and your MobileTogether Server is encrypted. In addition, we recommend securing the MobileTogether Server with user authentication. You can choose between built-in user management, or the MobileTogether Server can talk to your Active Directory server to integrate with your enterprise user management.

Alternatively, you can also install MobileTogether Server into a private cloud with a cloud provider of your choice, if you prefer to have your server running in a cloud rather than on your on-premises infrastructure.

Timeline

In May this year we first introduced Altova MobileTogether at TechEd in Houston, TX. In July we launched beta 1 of the MobileTogether Designer. In August we launched the beta 1 version of all the MobileTogether clients in the respective App Stores. And last week we launched beta 2 of MobileTogether Designer, Server, and the Apps. We expect MobileTogether to be commercially available later this fall.

Getting Started

You are invited to participate in the beta 2 and try it for yourself.


Just download the MobileTogether Designer from our website, download the MobileTogether Client from the App Store on your device, and you can be up and running and have your first solution on your device in 1-2 hours. Then, when you want to scale out to have your colleagues run it on their devices, you can download MobileTogether Server so that others can connect to it.

XBRL Table Linkbase Editor and Layout Preview

This week we launched our new Altova version 2015 product line, including new versions of XMLSpy, MapForce, all the other MissionKit tools, and all Altova server products.

One of the cool new features in XMLSpy 2015 is the real-time XBRL Table Linkbase layout preview. The XBRL Table Linkbase specification provides a mechanism for taxonomy authors to define a tabular layout of facts. The resulting tables can be used for both presentation and data entry.

However, XBRL Table Linkbase is a fairly young specification, so not many published XBRL taxonomies include Table Linkbase definitions yet. This is where XMLSpy can help greatly: in this video I will give you a quick demonstration of how to add a Table Linkbase to an existing XBRL extension taxonomy, using an XBRL filing that was submitted to the SEC as an example:



Learn how the graphical XBRL Table Linkbase editor in XMLSpy makes it easy to define XBRL tables for the presentation of multi-dimensional XBRL data. You can determine whether your table produces the desired results in the real-time XBRL Table layout preview, which is new starting in XMLSpy 2015.

How to download and process SEC XBRL Data Directly from EDGAR

Earlier this year I presented a webinar for XBRL.US where I demonstrated how you can use the vast number of XBRL filings that have been submitted by public companies to the SEC and are available to download for free from the SEC's EDGAR system:



Since then I've occasionally received requests for the slides used in that webinar, and the slides are available on SlideShare now.

In addition, several people wanted to see and reuse the complete Python scripts that I demonstrated in the webinar, so I have now uploaded those and published them in a new GitHub repository:

https://github.com/altova/sec-xbrl

These scripts are available under an Apache 2 license and require Python 3 as well as RaptorXML+XBRL Server installed on your machine. For more details, please see the README file published on GitHub.
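
The repository is the authoritative reference for the full download-and-process pipeline, but if you just want a quick feel for pulling filing metadata from EDGAR, here is a tiny stand-alone Python sketch that is not taken from those scripts. EDGAR's quarterly full-index layout is public; the year, quarter, and crude form-type filter are illustrative assumptions:

    import urllib.request

    # Fetch one quarterly company index from EDGAR's public full-index tree.
    URL = "https://www.sec.gov/Archives/edgar/full-index/2014/QTR2/company.idx"

    req = urllib.request.Request(URL, headers={"User-Agent": "example research script"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        lines = resp.read().decode("latin-1").splitlines()

    # Each fixed-width line lists company name, form type, CIK, date filed,
    # and the path to the filing; crudely filter for annual reports (10-K).
    for line in lines:
        if " 10-K " in line:
            print(line.rstrip())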

Computer brain surgery, or How to perform a RAIDectomy

I have an old 2010 Mac Pro in my home office that is my main photo editing machine (using Adobe Photoshop CC and Lightroom 5). It also serves as a remote desktop terminal to my office PC. It has 2 Intel Xeon CPUs with 6 cores each running at 2.93 GHz and 32GB of RAM, so even by today's standards, 4 years later, it is quite a powerful machine.

At least in theory it should be. Back when I bought the machine I thought it would be a good idea to get the Apple RAID card and 4 drives with 2TB each and set them up in a RAID 5 array for data protection, giving me a total usable 5TB of disk space, which I set up as a 2TB boot drive and a 3TB drive for data.

Apple RAID Card and Drives

That RAID card, however, has given me nothing but trouble over those four years, and it got so bad this summer that it was time for a radical move: a RAIDectomy!



The issues that I experienced with the RAID card were the following:

  • Every so often the RAID card would complain that the on-board battery was not fully charged, and would disable the write-cache, resulting in a severe performance hit that slowed down the entire machine to a crawl.
  • Every couple of months the RAID card would enter a mode called "Battery reconditioning", where it also disabled the write-cache for a day just for good fun, and there was no way to stop or postpone that process. If you wanted to get any work done that day, you were out of luck.
  • About every 3-4 months, the RAID card decided that it was time to rebuild the RAID, so it went into a 48 hour mode of scratching all the disks 100% and the computer was unusable during that time.
  • Then, last year, the RAID card informed me that the battery was dead and disabled the write-cache permanently.
  • Even after replacing the RAID card battery back then, these issues did not go away, but were rather just suppressed for 4-5 months, before they resurfaced.

And a few weeks ago the RAID battery began to give me the impression that it was going to die again soon, so I decided to take a more radical approach this time and get rid of the RAID card once and for all.

I also decided to replace the hard drives and get something with a bit more speed and less wear and tear to reduce the risk of data loss, so I bought a Crucial 1TB SSD drive and a new Seagate 3TB 7200rpm hard drive both with SATA interfaces. I figured I would use the SSD as my boot and application disk, and the larger 3TB hard drive for photo storage. To ensure that the smaller 2.5" SSD drive would properly fit in the 3.5" bay, I also purchased a conversion bracket.

So the process I had in mind was to copy the data over from my two old RAID logical drives to the two new drives and then remove the four old drives and the RAID card.

Easier said than done...

Obviously the first step was to do an extra backup (in addition to the TimeMachine/TimeCapsule network backup that was always running). So I connected my 4TB external USB 3.0 hard drive and started the copying process - only to realize that Apple in their infinite wisdom only equipped the Mac Pro with USB 2.0 ports and the only fast external port on that machine was a FireWire 800 connection. Of course, nobody still uses FireWire on this planet and no external hard drive in my collection supported it, so I had to wait 9½ hours for 3 TB of data to copy over to that external drive using USB 2.0.

This also meant that my original plan to restore the data onto the new disks from the external drive was going to be more painful than I was willing to entertain.

So I decided to take a 2-step approach. The Mac Pro luckily has two 5.25" bays for optical drives and I had only one of them filled, so I had a SATA connector available on the inside of the chassis that was not controlled by the RAID card. This allowed me to connect the new 1TB SSD inside of that bay and just let it sit there without any screws attached. Upon powering up the machine, I used Apple's Disk Utility to format and partition the new drive, and then planned to use the "Restore" function inside that software to clone all the data and the recovery partition from my old boot volume on the RAID onto the new SSD boot disk. Indeed, I found many online support discussions that praised the ease of using Apple's Disk Utility for that process.

What all these discussions and help forum posts failed to mention is that you cannot use that process to clone the active startup disk on a Mac. There is simply no way to do it with the built-in tools.

After a bit more research I found a nifty utility called Carbon Copy Cloner that promised to do exactly what I needed, and it offered a free 30-day trial, so I downloaded it and gave it a spin. Indeed, it was not only able to properly copy my entire boot volume from the RAID to the SSD, it also correctly copied and built the recovery partition for MacOS X. Huge tip of the hat to this software, and after I saw it working so flawlessly, I did, of course, purchase a license.

After the boot disk was cloned, the next step was to repeat the same process with the data drive: I disconnected the SSD, put the new 3TB drive into the 5.25" bay, used Carbon Copy Cloner to copy all the data over, and then removed it again.

Now it was time to perform the actual RAIDectomy and remove the 4 original drives and extract the RAID card. That process went very smoothly, and I was also able to quickly mount the 2 new drives in the main drive brackets and insert them so they connected directly with the backplane.

I was pleased to see the machine boot properly from my new drive, and even more excited to see the vastly improved speed of everything. This four-year-old Mac feels like a new machine now.

Since there were no good articles online on how to remove an old Apple RAID card, I figured I'd share my experience here - in case anybody else out there is contemplating getting rid of their RAID card.

Obviously, the entire process would have been much smoother, had Apple actually supported USB 3.0 in that machine rather than FireWire 800 as the only high-speed external port. Even more important, a product like the Apple RAID card should never have been sold in the first place. It was poorly designed, suffered from battery issues, and slowed down the machine at random times outside of the user's control.

The (Broken) Promise Of Wearables

This week the Moto 360 became available and sold out in record time. Next week Apple is rumored to introduce their iWatch along with the next generation iPhone. And Samsung just announced their next generation Gear watch last week that is expected to ship at the same time as the Galaxy Note Edge.

It is clear that Wearables hold a lot of excitement and a lot of promise, and obviously capture people’s imagination. But let’s pause for a moment first, and talk about past experiences with Wearables:

Wearables on a table

Among the many tech gadgets that I’ve acquired in recent years, I also purchased the first Samsung Gear watch when it came out last year, I bought the Google Glass Explorer Edition, and I bought the Fitbit fitness tracker.

Each device held a unique promise that it would make life easier and would add useful functionality. And each device ended up breaking that promise in subtly different ways.

My experiment with the Galaxy Gear smartwatch was very short-lived. The essential promise was to show important information about incoming emails or messages without the need to have to reach into your pocket and get out your phone. However, it wasn’t ubiquitous. Apps on your phone had to be written to be compatible with the watch. And for the most important communications medium – email – all the watch would tell me is that a message had arrived from somebody. No subject line. No relevant content summary. It was essentially useless.

The Fitbit fitness tracker held a simpler promise: wear it all day and all night and track your health. Except that it is so small that I lost it about 3-4 times during travels and only rediscovered it again when I emptied the suitcase after the end of the trip. Here it was the need to go to your computer and sync it there that broke the promise of ease of use. And the tracking information was primitive at best – a simple step counter. I gained no useful benefit from the device that would really help with my weight loss efforts.

Google Glass was perhaps the biggest disappointment of the three. The promise of having an augmented reality experience sounded great. But all it really did was provide notifications about messages, news, and occasional information from other Glass apps about restaurants, sightseeing spots, and other trivia. And after a short while any person wearing Google Glass was labeled a Glasshole, and privacy concerns soon resulted in signs being posted everywhere that Glass was not welcome.

In the end, all three devices suffered from a lack of really useful and unique features that actually provided a real-world benefit combined with the need to be constantly charged using various different charging devices and having a fairly short battery life. The day came when I forgot to charge them and then left them at home. Then the day came when I forgot where I put the charger. And now they’re all sitting in a drawer somewhere…

So I look towards the new generation of Wearables coming out this fall with a bit of cynical skepticism: how long until they, too, end up in a drawer somewhere?

In the end, there are only two devices that I carry with me all the time and every day:

  • My watch is a Seiko Astron Kintaro Hattori Limited Edition. It is a solar-powered watch that has a built-in GPS receiver and synchronizes time with the atomic clocks aboard the GPS satellites. And it also automatically adjusts your time-zone when you have landed – based on the GPS position. It never needs to be charged and it does one thing extremely well that I care about: tell accurate time.
  • My phone is a Samsung Galaxy Note 3 (soon to be replaced by the Note Edge). I’ve gone through many smartphones over the years, and it is my favorite so far. I hate its battery life. I hate that I have to charge it. But it does so many things so well in just one device, that it’s worth the hassle. And I use it all the time.

That last point is, perhaps, the biggest question about Wearables that we should ask ourselves: if we already have our phones in our hands all day anyway, what is the additional benefit of a wearable device?

Isis Mobile Wallet silently goes nationwide

It appears that without much fanfare the Isis Mobile Wallet has just expanded from the initial test markets of Austin and Salt Lake City to a nationwide rollout - at least for AT&T and American Express customers.

When it was originally announced on July 30 this year, the nationwide rollout of this NFC-based mobile payment system was planned for "later this year". Apparently that day is today, since I was able to go to an AT&T store in New York City this morning and replace my original SIM card with a new SIM card with a "Secure Element", which is a prerequisite for the Isis Wallet app. Once that SIM card was installed in my Samsung Galaxy Note 3, the Isis Wallet app (downloaded from the Google Play Store) allowed me to add my American Express card to my mobile wallet:

Galaxy Note 3 w/ Isis Wallet

Registering a new account with the Isis wallet app took a few minutes, as did activation of the credit card in the wallet, which required logging into my American Express account. But once that one-time setup process is complete, starting up the app is fast. You can secure the app with a customary PIN code and pick how many minutes the app will allow access until the PIN code is required again.

You can also use the Isis website to find out where you can use the Isis wallet today - it is pretty much any cashier's credit card terminal that shows one of these contactless payment symbols:

Contactless payment symbol

I was pleasantly surprised that in Manhattan there are literally thousands of stores already supporting the Isis wallet and I did my first test purchase at a Walgreens at Union Square. The checkout process worked smoothly, I just waved my Galaxy Note 3 at the contactless scanner while the Isis app was up, and the NFC chip in my phone transmitted the payment information to the checkout terminal, which processed my payment.

I've waited for NFC chips to be available in our smartphones for quite a while and it is a great pleasure to see mobile payments finally becoming a reality! Now we just need more stores to use contactless credit card terminals so I can finally leave my real wallet at home and use my mobile wallet everywhere!

Big Data analysis applied to retail shopping behavior

Everybody knows that online retailers like Amazon track customer behavior on their website down to every last click and then analyze it to improve their site. But when it comes to regular retail locations collecting detailed customer data by tracking their every move, people seem to be surprised, and sometimes even outraged…

Tracking Shoppers in Retail

It is somewhat ironic that we are used to being tracked online, but when customer tracking - sometimes even based on the very smartphones we carry in our pockets - hits the real world, privacy concerns abound. Interestingly, the same systems have been used for years to prevent theft, and nobody seems to have a problem with that. But once Big Data gets collected and is analyzed for more than just theft prevention and is utilized to analyze shopping behavior and improve store layouts, things get a bit murky on the privacy implications.

The NY Times has a nice article about this today, including a video that shows some of the systems in action. Very cool technology is being used from video surveillance to WiFi signal tracking, and I guess this is really just the tip of the iceberg.

It will also be interesting to see how the privacy implications around Google Glass play out in the next couple of months. If the government can track and record everybody, and if businesses can track and record their customers, then why shouldn't ordinary people also be allowed to constantly record and analyze everything happening around them?

When George Orwell coined the phrase "Big Brother is watching you" in his novel Nineteen Eighty-Four, the dystopian vision of a government watching our every move seemed to be the epitome of an oppressive evil. Privacy concerns have certainly evolved over the past decade to the point where video cameras on street corners are taken for granted in many democracies, and I'm sure we'll see a continued evolution of our understanding of privacy in the years to come.

Additional Coverage: Techmeme, Marketing Land, iMore, Business Insider, The Verge

Zero-day exploits, spies, and the predictive power of Sci-Fi

Reading the NY Times over coffee this morning, I noticed the article "Nations Buying as Hackers Sell Flaws in Computer Code", which details how nations (and, in particular, their secret service organizations) are now bidding for and buying zero-day exploits from hackers and security experts worldwide.

Certainly a very timely article, as the world still comes to grips with the evolving role of the NSA and what we've learned in the aftermath of the Snowden leaks. It also reminded me of a Science Fiction series I read in the late Nineties and at the turn of the century: Tom Clancy's Net Force.

Tom Clancy's Net Force

Set in 2010, this was a gripping story about a new fictitious FBI division created to combat threats in cyberspace. The storyline quickly evolved from criminal investigations into cyber espionage and cyber warfare. These were the days of the early web, when people still used AltaVista as a search engine - so a lot of the ideas in Net Force seemed pretty far out back then.

Interestingly, in the real world, in 2010 the US Army activated their Cyber Command.

And when people talk about Cyberspace in the media today, let's not forget that that term, too, was coined by Sci-Fi authors such as Vernor Vinge and William Gibson in the early Eighties. Like many other geeks of my generation, I devoured those books back then.

Musing about these things over coffee on a beautiful Sunday morning reminded me of an interview I gave to Erin Underwood at the Underwords blog a year ago, in which we talked about the importance of Sci-Fi for young adults and the oftentimes predictive powers of Sci-Fi literature…

The end of an era: PCWorld magazine stops print circulation

It seems logical that computer magazines would be the first to go. After all, computer geeks are the proverbial early adopters and have long since moved on to consuming news in a more timely fashion: online magazines, technology blogs, and up-to-the-minute real-time news on Twitter. Personally, I stopped reading print magazines and newspapers over three years ago. In an always-on always-connected world, where your smartphone provides you with instant access to everything, a daily print publication brings you yesterday's news. And let's not even talk about weekly or monthly print publications.


Over the past decade I've seen countless tech publications get thinner and thinner from issue to issue and then just disappear. Some of them make a successful transition to an online magazine, and some don't. Interestingly, however, there is one computer-focused print publication in Germany that has managed to still stay relevant: c't Magazin. For some reason they've been able to keep and even grow their readership well into the 21st century.

Don't get me wrong, I still like journalistic excellence, and that's why I subscribe to several online news sources that provide more well-researched and insightful commentary on the news.

I read these on whichever device I'm currently working on, be it the laptop, tablet, or smartphone - usually over a cup of coffee in the morning or while munching on a sandwich for lunch - and I intentionally include UK and German publications as well as AlJazeera to get a more balanced global view.

But for up-to-the-minute news I rely on Twitter as well as news alerts from Reuters, the Associated Press, and intelligence alerts from Stratfor, plus the usual geek-focused blogs, such as Engadget, Gizmodo, etc. and Techmeme as a blog aggregator.

One could, of course, argue that the era of computer magazines had ended much earlier already, when BYTE ended circulation in July 1998. But that would be dating myself…

Additional coverage: Techmeme, The Verge, TIME, ZDNet

Password Security and Keeping your Data Safe

If you are using a password that is 8 characters in length (or shorter) you just lost the game. And I'm not just talking about well-known passwords, such as "password", "monkey", "qwerty", or "12345678". This machine here is part of a cluster of 25 GPUs (Graphics Processing Units) that can crack any 8-character password of any complexity in less than 6 hours:

GPU Cluster

As reported on the Ars Technica blog today, researchers have built a Linux-based GPU cluster that can mount a brute-force attack on the NTLM cryptographic algorithm at the heart of Windows login authentication, trying an astounding 95^8 combinations in just 5.5 hours. At a speed of 350 billion guesses per second, it can crack any password of 8 characters or less in length without resorting to dictionary-based attacks.

Combining such power with existing dictionary-based cracking algorithms can possibly crack even longer passwords in a similar time.

The machine was unveiled by Jeremi Gosney at the Passwords^12 conference in Oslo, Norway, last week. The same machine can make 63 billion guesses per second against password hashes computed using SHA1 - a very widely used hashing algorithm.

How secure is your password?

The reality is that most people still use incredibly weak passwords. The 25 Most Popular Passwords of 2012 are well-documented, as are the 10,000 Top Passwords of 2011. If your password is on either of those lists, you should stop what you are doing right now and go change it. Seriously. All of these well-known passwords, as well as any word that appears in a dictionary, are highly susceptible to hacking.

Up until a little while ago the common recommendation was to add a few numerical digits and maybe a special character or two to the mix, and that would usually result in a pretty safe password. Most sites also require users to pick a password of 8 characters in length (or more), and people usually stick with 8. But that is simply no longer sufficient, as any password 8 characters in length can now be cracked within 6 hours by a brute-force attack.

However, the solution is fairly simple: just by doubling the password length from 8 to at least 16 characters, the time required to crack the password with the new GPU cluster or similar machines increases from 6 hours to 138 billion years. Even assuming reasonable advances in processor power over the next couple of years, that should make the password pretty safe for the foreseeable future.
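
The arithmetic behind these numbers is easy to check yourself. Here is a quick Python sketch assuming the 95 printable ASCII characters and the 350 billion guesses per second cited above; the published 138-billion-year figure evidently rests on slightly different assumptions, but the order of magnitude is the point:

    # Brute-force search time for an N-character password, assuming an
    # alphabet of 95 printable ASCII characters and 350 billion guesses/sec.
    GUESSES_PER_SECOND = 350e9
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def crack_seconds(length, alphabet=95):
        return alphabet ** length / GUESSES_PER_SECOND

    print(crack_seconds(8) / 3600)               # about 5.3 hours
    print(crack_seconds(16) / SECONDS_PER_YEAR)  # trillions of years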

If you want to see how (in)secure your old password was, you can use this service. But please make sure you change your password afterwards!

In addition to these thoughts about password length and complexity, it is also important to realize that sooner or later most online websites end up being hacked and all their passwords being stolen (see, for example, the LinkedIn Password Hack in June 2012). Therefore, it is vitally important to minimize the damage and not reuse your passwords on multiple sites.

Ultimately, however, a password alone cannot ever be 100% secure. In addition to hacking in its various forms, any password is also susceptible to phishing attempts, trojans, key-loggers, and other approaches that compromise its security. The only proven approach to really keep a system secure is based on a technology called 2-factor authentication where you need to provide at least two pieces of information to access a system: for example, something that you know (password) and something that you have (secure token).

A lot of these topics have also been discussed in various newspaper articles and blog posts recently and I have provided links to the most useful articles at the bottom of this blog post.

Recommendations

Here is my own personal list of measures that help me keep my passwords and data more secure - these are based on my own approach that I've developed over time, so feel free to adopt any of those for your needs as you see fit:

  1. If an online service offers 2-factor authentication, I always take advantage of it - especially for sensitive information, such as online banking, investments, etc., but I also use it for DropBox, my Google account, or even for Facebook.
  2. All passwords need to be 16-20 characters in length at a minimum and include at least 6 numeric or special characters. This makes them relatively uncrackable, provided that one doesn't include any common words from the dictionary. I try to stay away from common recommendations and password-generation patterns, such as taking the first character of each word in your favorite song lyrics or similar approaches. If a pattern has been described somewhere, you can rest assured that hackers know about that pattern and can tweak their algorithm to crack it.
  3. I use different passwords for all sites - not a single password shared amongst multiple sites.
  4. For all online services I use computer-generated random passwords with a length of 16-20 characters or longer - depending on what the website allows - and these passwords use at least six numeric or special characters (a minimal sketch of such a generator follows after this list). For example, such a password might look like this: [mLzJKf1j7cP3n|B!8@WJw
  5. I use a password-management application to generate and keep track of all these random passwords. There are many popular applications of this kind on the market, and after some research and testing I found 1Password to be the right solution for me, since it is available for Windows, Mac, iOS, and Android.
  6. My master password for the password-management software is somewhere between 25-35 characters in length and uses more than eight numeric and special characters. Nothing in this password is susceptible to a dictionary-based attack, so it should withstand all current cracking capabilities.
  7. I store all my sensitive information and financial data in an encrypted file and keep it safe by storing that file on a USB drive. I use TrueCrypt as the encryption software of choice, because it is again available on multiple platforms. The password for my encrypted data is again highly complex and fulfills all of the requirements outlined above.
  8. To guard against catastrophic failure of the password-management software, a printout of all passwords is stored in my safe.
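
Since point 4 above mentions computer-generated passwords, here is a minimal sketch of such a generator using Python's secrets module; the 20-character default and the six-digit/special-character minimum simply mirror my own rules and are easy to adjust:

    import secrets
    import string

    LETTERS = string.ascii_letters
    EXTRAS = string.digits + string.punctuation

    def generate_password(length=20, min_extras=6):
        # Draw at least min_extras digits/special characters, fill the rest
        # with letters, then shuffle so the extras aren't bunched together.
        chars = [secrets.choice(EXTRAS) for _ in range(min_extras)]
        chars += [secrets.choice(LETTERS) for _ in range(length - min_extras)]
        secrets.SystemRandom().shuffle(chars)
        return "".join(chars)

    print(generate_password())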

With this approach I feel that I have done a pretty good job of making a hacker's life rather difficult. Is it 100% secure? Probably not, and I constantly tweak my system as new information surfaces and we learn about new improvements in processing speed or cryptography advances.

What is your strategy? Let me know your thoughts here on the blog or via Twitter or Facebook comments…

Further reading:

Tools I use:

Ingress - an AR-MMOG created by Niantic Labs at Google

I don't often write about games on my blog, but this one deserves an exception, because it is extremely innovative, unique, and a harbinger of things to come. On November 15 Google launched a closed beta of Ingress, a sci-fi themed game currently available only on the Android platform.

Ingress defines a new category of game that could probably best be described as an AR-MMOG (Augmented Reality - Massively Multiplayer Online Game). The basic premise is that an alien influence called the Shapers is trying to control human thought and is entering the world through portals that are often associated with historically significant locations, statues, or public displays of art. These portals are associated with Exotic Matter (called XM in the game), which needs to be collected to energize the player as well as the portals.

Players must move through the real world and visit these portals with their GPS-equipped Android smartphones to play the game.

The objective is to hack the portals, link different portals, and create so-called control fields by forming triangles of linked portals. After completing a few training missions, players must choose a faction and either side with the Resistance, who are trying to protect mankind and prevent further Shaper influence, or side with the Enlightened, who consider Shaper influence to be beneficial and usher in the next logical step in the evolution of mankind.

Enlightened vs. Resistance

I was very happy to receive my invite to the closed beta on November 21 and found some time on the morning of Thanksgiving Day as well as on Black Friday to play the game on my Galaxy SIII. Doing so allowed me to take some extensive walks on both days and burn off a lot of the food calories that would have accumulated otherwise.

Playing the game is extremely addictive. I decided to join the Resistance and explored the available portals in and around Marblehead on the first day. Capturing my first few portals was fairly easy, but then I encountered some Enlightened portals that gave me a good challenge right away. Most of the portals are directly taken from the Historical Marker Database, so you learn a lot more about the history while playing the game. I also found that having a car to drive to neighboring towns and some remote portal locations is a huge bonus - especially when you get to deploy higher-level portals that have a range of several kilometers available for linking.

For example, on one of my excursions I took a stroll through downtown Salem in my quest to capture more portals and found one above the statue of Roger Conant:

Approaching a portal in Salem

By hacking and capturing one portal after the other, I was able to not only collect the required items for linking portals together, but also the necessary weapons for attacking portals of the opposing faction. And it didn't take long for me to eliminate all of the Enlightened influence in my area and connect several of the portals in Marblehead to create the necessary control fields that are then shown on the display of the Ingress app:

Control Fields in Marblehead

As I leveled up, I was able to create more powerful portals that allowed linkages over distances of several kilometers, and so I used Black Friday for some further excursions into Salem as well as trips to Swampscott and Nahant that allowed me to create a much larger field to protect all the inhabitants in my immediate vicinity:

Larger area control fields

Now it is only a matter of time until the Enlightened students at MIT try to increase their influence further north and begin their attack on the North Shore. I am sure a battle of epic proportions will ensue in the days to come:

Larger Boston Area Intel

Ingress is extremely well done for a beta version of a game. I can only assume that Google has done some extensive internal testing before opening up the beta to people outside. And the combination of GPS, mapping, the Historical Marker Database, and the many different web properties (see list below) provides a truly addictive game-playing experience.

Even before you get immersed in the actual gameplay - and while you anxiously await the arrival of your invitation to participate in the beta - there are several websites that provide hints at the background story, videos, and artwork by fictional characters that appear to exhibit signs of Shaper influence.

One can easily see how Google's Project Glass will be used in a future version of this game that takes augmented reality game-play to a whole new level…

Obviously, there are also some privacy implications in this kind of gameplay, and several bloggers have already questioned Google's motives in creating this game. Allegations range from creating an optimized database of walking paths to further enhance Google Maps to more sinister data collection for advertising purposes.

Be that as it may, for the time being I will continue participating in the beta for a very simple reason: the game is actually a lot of fun to play!

Further information on Ingress can be found here:

Also see blog posts on AllThingsD, Engadget, pandodaily, The Verge, TechCrunch, and others…

P.S. Don't ask me for an invite, as I don't have any to give away, sorry!