
PowerShell Shenanigans

Lately I have been working in a position oriented mostly towards the system administration side. As a result, I have been creating some tools to make a developer's everyday life easier.

Unfortunately, because the company has a legacy product (they all do, even startups!), I had to provide some tooling for that too. As you may guess, that product was running on Windows servers. And here's where the story starts getting interesting.

PowerShell was very popular in the past… Yet now it's becoming a nuisance…

It is a Microsoft product!

From: The dev community

Yes, yes! I know. Half of the people you ask are going to come back at you with that phrase. It isn't open source, and it is a Microsoft product. And when they utter it you can see their facial expression, saying it with such aversion, as if Microsoft were the devil himself and they were the twelve apostles!

Sure, the product has its issues, but it also has some (in my humble opinion, very good) documentation online: https://docs.microsoft.com/en-us/powershell/

Really powerful stuff, coming from Microsoft and the chaos that is called Windows OS… (let's not forget Vista, Windows Millennium, Internet Explorer, and all those “successful products” we were forced to use…).

To cut to the chase

My main point is that PowerShell strives to offer the tools system administrators need to administer their Windows installations. And unfortunately, it fails. As a product it is so chaotic and big, with so many different pathways you can end up caught in, especially if you compare it with the simplicity of its Unix counterparts. To be fair, they have tried to be more effective and direct: in any modern installation of Windows 10, all you have to do is press WinKey, type “power”, and hit Enter, and you are in a CLI where you can start executing commands. Quite fast, and user friendly.

The problems start when you try to consolidate stuff: when you want to write different scripts that perform different tasks, or when you try to include that awesome script you wrote that is essential to the grand scheme of your process. That's when things start to get interesting, and frankly, I don't think Microsoft really put things into perspective when they started implementing this product.

For example:

I was asked by the security team to lock down user permissions on a given server. The best way to do that (since we want our users to have only the permissions they actually need) is to create another role (or user) and assume that role to run things. Since the setup was old, the only option I had was to use a dedicated user, which led me to the following hidden default decisions by PowerShell.

I had to use this:

# Build a credential object for the user we want to assume
# (the backslash separates the domain from the user name)
$username = "domain\user_name"
$securePassword = "secure_hash" | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
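
With that credential in hand, you can run commands as the other user, for example via Invoke-Command. A minimal sketch (the server name and script block here are made up):

# Hypothetical usage: run a command on the remote server as the assumed user
Invoke-Command -ComputerName "legacy-srv-01" -Credential $credential -ScriptBlock { whoami }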

That was all I needed in order to assume the user and run the commands I wanted. The only problem was that I first had to produce that secure_hash by encrypting the password with this function:

ConvertFrom-SecureString

If you visit the documentation, don't read the description carefully (especially the last part of it), and jump straight to the usage, you will try to call it somehow like this:

# Prompt for a password, then serialise the SecureString into an encrypted "standard string"
$SecureString = Read-Host -AsSecureString
$StandardString = ConvertFrom-SecureString $SecureString

The above will echo something like this:

Write-Host $StandardString
70006f007700650072007300680065006c006c0072006f0063006b0073003f00

for the password: powershellrocks?.

Now if you take that $StandardString and pass it to the ConvertTo-SecureString function, it will create a System.Security.SecureString object (whatever that is, I couldn't properly inspect it…), which can be passed along in a credential to log in to Windows computers.
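
In other words, the round trip looks roughly like this (a sketch; the user name is a placeholder):

# Rebuild the SecureString from the saved standard string...
$secure = ConvertTo-SecureString $StandardString
# ...and wrap it in a credential object for logging in
$credential = New-Object System.Management.Automation.PSCredential ("domain\user_name", $secure)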

Now this works just fine if you run all those commands on the server you want to work with. The problems start later, when you re-provision that server (and of course you have saved that $StandardString, since the user hasn't changed credentials and you need it to log him in). That is, if you hadn't paid attention to the last sentence of the description:

If no key is specified, the Windows Data Protection API (DPAPI) is used to encrypt the standard string representation.

Surprise!

A quick Google search for Windows Data Protection (DPAPI) will show you that it's nothing more than a key storage engine that saves a bunch of keys for the user. So when you call the function without the -Key argument, a different, per-user key coming from DPAPI is used, which means the resulting string can only be decrypted on the same machine, by the same user. And of course, the error you get back if you call the reverse function elsewhere isn't that descriptive either:

ConvertTo-SecureString : Input string was not in a correct format.

Was it too hard to give a message like “key is invalid” or “decryption failed”? Especially since they use the hidden Windows key by default?
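
For the record, the way around this is to pass an explicit key to both functions, so the encrypted string survives re-provisioning. A minimal sketch (assuming you have somewhere safe to store the key itself, which now becomes your problem):

# Generate a random 256-bit AES key (16-, 24-, or 32-byte keys are accepted)
$key = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)

# Encrypt using the explicit key instead of DPAPI...
$SecureString = Read-Host -AsSecureString
$StandardString = ConvertFrom-SecureString $SecureString -Key $key

# ...and decrypt it later, even on a different server
$restored = ConvertTo-SecureString $StandardString -Key $key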

Unfortunately this goes across all of PS

The guys who originally wrote PowerShell didn't want to adhere to “explicit is better than implicit”, a principle used quite often in software development (see this). Even though I am primarily a Linux user, I have always loved the tools MS provided to Windows users. And frankly, they were amazing in the past. But unfortunately, as time goes by, I am realising that the decisions taken while implementing those tools weren't as objective as those behind the respective open source ones.

And even when the open source folks didn't do such a good job and ended up creating unhelpful tools, those tools were deprecated quite fast. This cycle didn't happen with Microsoft. A product had to go live, and whether that product covered the needs of its users was in fact irrelevant to whether it went live or not… (sounds familiar?)

Request Loop

It’s been a while since I last posted…

There is always a reason for that. My reason was the sum of many different variables. Just as the great mentor said that luck is the sum of many coincidences, that's what happened in my case as well.

Where do I begin?

Jobwise: Capital controls, working day and night, a lot to do and no time to do it…

Blogwise: I had a very strange setup for my blog (and a very, very outdated one, I might add). Since I am using Heroku, and they decided to change their stack and migrate from Cedar-10 to Cedar-14, I said, OK, what the hell, let's do it.

Alas, I had a serious problem with libssl0.9.8, which was built into my PHP module and was not supported on Cedar-14 (whoever wants to do the upgrade, have a look here first).

Long story short, I fixed it, and I also found that many posts I wrote with various hacks for the pg4wp plugin were incorporated into a single release from kevinoid: here

I will also contribute some changes that need to be taken into account, since the module is quite old and, as I have previously stated, not at all well written.

That’s not the main point of this post though.

I wanted to share an experience I keep coming across lately.

Now, according to popular trends, we are experiencing (and will keep experiencing in the future) a huge bloom of microservice architecture. This guy here explains how and why they decided to go for microservices.

I agree. There are many benefits compared with having a single monolithic (and at times obsolete) repo for web applications. It is a nice solution when your company is scaling and you have to maintain a lot of different parts, especially if you have different teams and each team wants to “do their own thing” about a solution.

However it’s not the solution to Everything!

I will elaborate more:

I recently had to debug an HTTP step-based procedure (the client requests this page, books this ticket, goes there, etc.) that ran across three different instances of different technologies, all talking over HTTP. The first was Python with WSGI, the second was PHP with Apache, and the third was Ruby with Unicorn.

Try to debug this. I dare you. Seriously. In my local setup I had all three instances running, with three different IDEs, each running its own debugger. OK, OK, you say Docker will simplify the installation. I agree it does, but it does not help the debugging at all.

The most important thing, though, isn't the debugging/testing of many different apps over HTTP.

It's HTTP itself.

And believe me, I have seen a lot of “Senior” Devs fall into the same trap of API'zation, making the same architectural error over and over.

The Request Loop

You won’t guess how many times I’ve seen this happening…

Consider the following diagram:

This is the actual loop – when one request is still open, another comes along, and things get messy…

The browser sends a request to the Frontend app. Now the Frontend app could forward it (or change it a bit) to the Backend app.

In our setup the backend app was a PHP app.

Now, since PHP by default does not support threading (no pthreads), each HTTP request is served by a separate PHP process via Apache.

This is very complicated, since you keep one connection (process) open while you open another one which could (at some point, maybe) rely on data from the first one. You cannot access that data across processes.

Not to mention that you cannot really debug this thing either: you insert a breakpoint in the first request's procedure, and the second request (which arrives a few ms later) is served without the debugger stopping at that point.

My point is that when you decide to go Microservice’d

Try to avoid request loops when you need to do something synchronous. Or use something different: do threading, use a message queue, or something else.

You will be surprised how much time you will spend trying to debug and understand what is wrong in this set-up.

I will close with the following meme:

Some people, when confronted with a problem, think, “I know, I’ll use threads,” and then two they hav erpoblesms.