Desktop applications in a world dominated by the Web

 
In tune.jpg
 

My day usually begins with a cup of coffee while checking trending topics on the web. I then play some music from Spotify’s web player while I get ready for work. When I sit at my desk at the office, I open several web-based applications essential to my work and start working through my to-do list.

One day I got to thinking that I could optimize my workflow, so I installed app-monitoring software to see where I spend most of my time. Unfortunately, I picked a subpar application for the job and the end results weren’t that helpful… they did make me think, though… 85% of my time was spent in the web browser, 5% in desktop apps and 10% in messengers and my email client. 85% of my time spent in the browser? That didn’t seem right… I dug deeper and spent a week consciously monitoring where I was and what I was doing. Well, it turns out the stats didn’t lie. Almost everything I used was browser-based. CRMs, text editors, the corporate portal, certification materials and all kinds of things essential to my work were web-based. It seems everything is web and mobile based nowadays.

I compiled a full list of the desktop tools I actually use:

  • Visual Studio

  • Visual Studio Code

  • Eclipse, albeit a bit modified

  • Messengers (Skype, Telegram, Slack)

  • Outlook

  • Browser

  • Terminal

That’s it…

Looking at that list, I started asking myself: why would anyone want to use desktop software nowadays? How do we actually integrate desktop applications with a back-end in the cloud? Do we even need desktop software at all? Now, I was a bit sceptical about the answers to those questions, but then I started thinking about it…

Offline availability

 
cord-cut.jpg
 

Everything is great and functional until your Wi-Fi drops… And depending on where you work, that is a real scenario. If everything we do is web-related, we inevitably rely on our internet connection. Sure, you can always fall back to a mobile hotspot from your phone, but if you have to do something urgent that uses a lot of data, that isn’t a very good option. Some web technologies try to solve this problem, but they still have a long way to go before they are actually usable in this scenario.

Having desktop software with a built-in caching mechanism and synchronisation logic for when you’re back online is a real benefit. Think source control. Everything kept under source control is cloud hosted (or on-premises, depending on your company’s policies). Nevertheless, we mostly work on files offline, and once our work is complete we sync back to the server.

Now, of course, the real-world scenario is a bit more complicated than that, especially when you have to implement the sync logic yourself, but still, with enough effort and a good app architecture, there is no reason not to do it.
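The cache-then-sync idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration (the names `OfflineStore`, `push` and so on are mine, not from any real product): writes always land in a local cache so the app stays usable, and changes made while offline are queued and replayed once a connection returns.

```python
from dataclasses import dataclass, field


@dataclass
class OfflineStore:
    """Local cache of records plus a queue of changes made while offline."""
    cache: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)

    def save(self, key, value, online, push):
        # Always update the local cache first, so the app keeps working offline.
        self.cache[key] = value
        if online:
            push(key, value)
        else:
            # No connection: queue the change for later synchronisation.
            self.pending.append((key, value))

    def sync(self, push):
        """Replay queued changes, in order, once a connection is available."""
        while self.pending:
            key, value = self.pending.pop(0)
            push(key, value)
```

In real life the `sync` step also has to deal with conflicts (two edits to the same record), which is exactly where most of the effort goes; the sketch only covers the happy path.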

Having the option to do your work even when not connected to a network is a luxury nowadays, but it doesn’t have to be.

Security

 
security.jpg
 

Having covered the offline benefits, we can’t go without mentioning security. Every system connected to the internet is vulnerable. No exceptions. Having the ability to sync up on your own terms is invaluable for companies where security is the number one concern. Of course, this isn’t the most common scenario, but when it comes to large amounts of corporate or government data, it may be the only way to go.

Performance

 
engine.jpg
 

Correct me if I’m wrong, but I have yet to meet a video or image editing professional using a web-based tool… The simple answer to that: you can’t really use the full power of the hardware through a web page.

It would be like using an electric scooter in a drag race...

Sure, browsers are getting better and better at tapping into system resources, but when it comes to proper hardware acceleration, desktop wins every time.

Of course, that doesn’t mean you can’t use online tools to complement such software. Most software nowadays is cloud-enabled, usually for saving settings and accessing a cloud-based file system, but that isn’t the limit. You can still benefit from the numerous web-based services out there if you design your desktop app architecture in a meaningful way.

I am sure there are other reasons as well, but through my software developer’s prism, that’s quite enough for me not to dismiss the need for desktop software, and even to encourage it in some cases.

App architecture that works

 
Architecture.jpg
 

Several times I mentioned that you have to have a good architecture to make your desktop app work. I lied… You have to have it to make any of your apps work. Sure, you can half-ass it and just go with the flow. Long term, though… you’ll be really happy you invested some time in architecture design when you were building the foundations. Now, don’t get me wrong… there isn’t a perfect solution to any problem, and what works in one case may not work in another. Having said that… building an application that supports multiple interfaces, interconnected with one another? Well, there aren’t many ways to do it and be happy with what you have.


A real life example of one that didn’t (work)

A while back, we had to create a complex scoring system for Rhythmic Gymnastics. What we thought was the hard part wasn’t the problem at all… Of course, as usual, that clicked pretty late…

The system consisted of a database, a data access layer making sense of the data, a… Business Logic Module (here we went in an extremely wrong direction), and a lot of connected interfaces:

  1. Desktop module - admin panel that does everything…

  2. Several mobile interfaces, in essence - webviews:

    1. View with live scores being populated after every performance

    2. Input panel for judges to enter scores for a performance

    3. View for hall monitors - displaying the current competitor and their results to the audience

  3. TV graphics data export - a read-only… call it an API… to be consumed by software in a mobile TV station, in order to broadcast the scores with the TV feed

We thought we actually did a pretty good job with the architecture we did…

Spoiler alert: we didn’t…

Here is what we came up with (... please don’t show this to anybody… just in case I decide to look for a dev job again):

 
Screenshot 2019-06-19 at 14.12.46.png
 

You might notice the Business logic module I mentioned is nowhere to be found on the diagram… That’s because it’s shared between the Admin Panel (Desktop module) and the Rest API.

Neither of those is the place for that. And I can’t stress that enough…

So… Several issues with this design.

  1. The database and a client lived on the same physical machine. That’s a recipe for disaster. We had to debug something at a live competition, and no one thought we would actually block everything that way. But that doesn’t even begin to cover the problems with such a design.

  2. We put repositories over Entity Framework… Not that it can’t be done, but we absolutely didn’t do it right. At a certain point we had a CommonRepository that was used for everything.

  3. The business logic was embedded in the interfaces rather than kept in a separate module. Sure, some things are interface-specific, but there should be a distinction. We saw the harsh reality when we tried to certify the system and had to cover everything with tests.

  4. The desktop module had a direct connection to the database rather than using the same channel as the other UI modules, say… the REST API. This meant we couldn’t decouple the desktop module from the DB server even if we wanted to.
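To make point 2 more concrete: our actual code was .NET with Entity Framework, but the anti-pattern translates to any stack. Here is a hypothetical Python sketch (all names are mine) contrasting a catch-all CommonRepository, where every caller passes raw table names and filters, with a narrow, intention-revealing repository per aggregate.

```python
from typing import Protocol

# Anti-pattern: one catch-all repository used for everything.
class CommonRepository:
    def __init__(self, db):
        self.db = db

    def get(self, table, **filters):
        # Callers pass raw table names and ad-hoc filters, so query
        # knowledge (and business rules) leak into every UI module.
        rows = self.db[table]
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]


# Better: a narrow interface that says what the domain actually needs…
class ScoreRepository(Protocol):
    def scores_for_performance(self, performance_id: int) -> list: ...


# …with the storage details hidden inside one focused implementation.
class InMemoryScoreRepository:
    def __init__(self, db):
        self.db = db

    def scores_for_performance(self, performance_id: int) -> list:
        return [r for r in self.db["scores"]
                if r["performance_id"] == performance_id]
```

The second shape is also the one that survives certification: you can cover `scores_for_performance` with tests without knowing anything about tables or filters.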

I won’t get deeper into the problems with this design. Instead, let’s look at…

A real life example of one that did (work)

Several years passed after the abomination you just read about, and we got a bit less stupid. As the consulting business died down, we found a cool product idea that we wanted to pursue. This time, though, we did our homework… And by homework I mean we watched several back-end architecture lectures, read a couple of articles, and combined everything we saw into an architecture that proved to be quite robust.

 
Screenshot 2019-06-21 at 10.31.38.png
 

That’s cool, but why does this work? Simple - separation of concerns and loose coupling. I had been reading about those things ever since I started reading about writing quality code back in university. Well, hand on heart - I didn’t appreciate them until we actually applied them at an architectural level. Everything was encapsulated separately. It didn’t matter which UI you used, simply because everything worked through a REST API. All the business logic was segregated into its own module. The data access layer was self-contained.

It was easy to test.

It was easy to maintain.

It was easy to scale.

It was easy to extend.

It was a breeze to work on.

Sure, it took a while to get it going, but after we had the foundations, everything was smooth and easy to work with.
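The layering can be sketched in a few lines. This is a hypothetical Python illustration, not our actual code: each layer only knows about the one directly beneath it, and every UI (desktop, mobile webview, TV export) talks to the same API entry point. The validation rule and all names are invented for the example.

```python
# Data access layer: self-contained, knows nothing about business rules.
class ScoreRepository:
    def __init__(self):
        self._scores = {}

    def add(self, athlete, value):
        self._scores.setdefault(athlete, []).append(value)

    def all_for(self, athlete):
        return list(self._scores.get(athlete, []))


# Business logic: depends only on the repository, never on any UI.
class ScoringService:
    def __init__(self, repo):
        self.repo = repo

    def record_score(self, athlete, value):
        if not 0 <= value <= 20:  # hypothetical validation rule
            raise ValueError("score out of range")
        self.repo.add(athlete, value)

    def final_score(self, athlete):
        scores = self.repo.all_for(athlete)
        return sum(scores) / len(scores) if scores else 0.0


# API layer: the single entry point every UI talks to.
def handle_post_score(service, payload):
    service.record_score(payload["athlete"], payload["value"])
    return {"status": 201}
```

Because the service has no idea who called it, adding a new interface (or swapping one out) never touches the business logic - which is exactly what made testing, scaling and extending easy.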

The end result of the App architecture was something like this:

 
Screenshot 2019-06-21 at 15.36.27.png
 

The inner workings were developed by following Jason Taylor’s Clean Architecture talk: https://bit.ly/2N1p0sD

I strongly suggest you go through it. It looks a bit complicated at first, but trust me - after trying it, the complications go away quite quickly.

Handling REST on desktop

desk.jpeg

Nowadays, handling REST anywhere is a must-have skill. Fortunately, every major development platform gives you a pretty straightforward way to do it. I am going to look at how to do it in .NET, although the approach can easily be adapted to almost anything you might use, and it’s definitely not restricted to desktop. Especially in .NET. You can check out a really good overview here: https://bit.ly/2BI3C4T

We played with this quite a lot, but the solution we settled on was quite simple. We had a separate project for a “RESTClient” that basically included an interface and a default implementation of that interface. After that, the only thing you have to do is inject the correct implementation wherever you use the IRESTClient interface.

What that gives you is the ability to be flexible.

You have to be offline? That’s fine, just switch to an OfflineRESTClientImplementation.

You have to test a ViewModel? Just inject a TestRESTClient, so you save on bandwidth or API call charges.

You can expand on the calls to keep analytics, batch updates, filter calls, sync updates when back online… anything specific to the current UI that is not related to the actual business logic executed on top of the data.
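To make the pattern concrete: our actual implementation is C#/.NET, but the same shape works anywhere, so here is a minimal Python sketch with hypothetical names (`RestClient`, `FakeRestClient`, `PeopleViewModel`). The ViewModel depends only on the interface, and you inject whichever implementation fits the situation - live, offline, or a canned one for tests.

```python
from abc import ABC, abstractmethod


class RestClient(ABC):
    """Plays the role of IRESTClient: the only thing ViewModels know about."""

    @abstractmethod
    def get(self, path: str) -> dict: ...


class HttpRestClient(RestClient):
    """The 'default implementation' that talks to the real API."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get(self, path: str) -> dict:
        # A real implementation would issue an HTTP request here
        # (e.g. with urllib.request); omitted to keep the sketch offline.
        raise NotImplementedError


class FakeRestClient(RestClient):
    """Inject this in tests to save on bandwidth and API call charges."""

    def __init__(self, canned: dict):
        self.canned = canned

    def get(self, path: str) -> dict:
        return self.canned[path]


class PeopleViewModel:
    def __init__(self, client: RestClient):
        self.client = client  # the implementation is injected, not created here

    def first_person_name(self) -> str:
        return self.client.get("/people/1")["name"]
```

A wrapper that batches updates, records analytics, or queues calls while offline is just another `RestClient` implementation that delegates to the real one - nothing in the ViewModel has to change.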

I have created a small sample of the approach. It’s not very detailed, but it will light the trailhead, so you know where to head. You can access it here: https://bit.ly/2x7aSmy

So….

Summary

There still is a need for desktop software, and I don’t see it disappearing anytime soon.

Want to structure your business logic in a way that will enable you to scale easily? Check Jason’s talk - https://bit.ly/2N1p0sD

Want a cool and flexible approach to consuming REST APIs on desktop (and everywhere else)? Check my sample - https://bit.ly/2x7aSmy


Let me know if you found a different approach that worked for you.

Resources

Key materials I used for the post and demo:

https://swapi.co/ - an API for the Star Wars universe

https://github.com/olcay/SharpTrooper - a .NET client tailored to SWAPI

https://bit.ly/2N1p0sD - Clean Architecture with ASP.NET Core 2.1

https://bit.ly/2x7aSmy - Samples Repo