Quick and Dirty Web Screenshot Tool

In a recent test, I needed to throw a list of websites at a tool and get a screenshot of each site, so I could quickly figure out what they were.

There are several tools out there that can handle this sort of thing, but none of them were “quick and dirty”, which is what I really wanted: I didn’t want to spend a bunch of time configuring things, and I didn’t want to pass a ton of command line arguments. I just wanted to point a script at a text file of URLs (obtained via nmap scanning) and have it go grab a snap of whatever was running at each one.

Since I couldn’t find something like that, I just wrote my own. It turns out that all it takes is around 50 lines of JavaScript.
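
To give a sense of the gist, here’s a minimal sketch of the approach, assuming Puppeteer drives a headless browser and the URL list lives in urls.txt (both assumptions of mine; the actual repo code may differ):

// Minimal sketch: screenshot every URL listed in a text file.
// Assumes Puppeteer (headless Chrome); file names are illustrative.
const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const urls = fs.readFileSync('urls.txt', 'utf8').split('\n').filter(Boolean);
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const url of urls) {
    try {
      await page.goto(url, { waitUntil: 'networkidle2', timeout: 15000 });
      // Build a filesystem-safe file name from the URL
      await page.screenshot({ path: url.replace(/[^a-z0-9]/gi, '_') + '.png' });
    } catch (err) {
      console.error('failed: ' + url + ' (' + err.message + ')');
    }
  }
  await browser.close();
})();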

If you ever have a similar need, feel free to check out https://github.com/rossja/url2screen and see if it does what you want. It worked perfectly for me =)

Dockerized Vault with Consul Backend

I recently wanted to get a working Vault instance running. Since my goal is to sharpen my devops skillset, there were some specific features I wanted to make sure I had nailed down:

  • Run Vault in server mode, rather than dev mode
  • Run it in Docker, ideally using docker-compose
  • Have other containers access the vault for secrets
  • Use a persistent storage mechanism for the backend
    • I didn’t want to use files for this

To meet the storage requirement, after looking into it a bit it became apparent that Consul was the right move (as of this post, it’s the only storage backend officially supported by HashiCorp for Vault, which makes sense, since they make both products).

It turns out that getting Vault and Consul up and running, and talking to each other, is not as straightforward as it could be. I found lots of examples, but most of them no longer worked, and all of them used version 2 of the docker-compose spec. After much trial and error and a ton of failure, I finally got things running successfully. If you want to try it out, you can check it out at https://github.com/rossja/docker-vault-consul/
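
For a sense of the overall shape, here’s a stripped-down sketch of the compose file using the version 3 spec (service names, ports, and paths are illustrative; the repo has the full working configuration):

version: '3'
services:
  consul:
    image: consul
    command: agent -server -bootstrap -client=0.0.0.0
    volumes:
      - consul-data:/consul/data   # persistent storage for the backend
  vault:
    image: vault
    command: server
    cap_add:
      - IPC_LOCK                   # lets Vault lock memory to avoid swapping secrets
    environment:
      VAULT_LOCAL_CONFIG: '{"storage": {"consul": {"address": "consul:8500", "path": "vault/"}}, "listener": {"tcp": {"address": "0.0.0.0:8200", "tls_disable": 1}}}'
    ports:
      - "8200:8200"
    depends_on:
      - consul
volumes:
  consul-data: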

One thing to note: as configured in the repo, the Vault container talks to Consul over HTTP. Since Vault encrypts secrets before writing them to the backend, no sensitive data is passed in plaintext over the network (in theory), but it’s still not best practice to use HTTP here.
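
The relevant piece is the Consul storage stanza in the Vault config, which looks something like this (the scheme is shown explicitly for clarity; HTTP is the default):

storage "consul" {
  address = "consul:8500"
  scheme  = "http"
  path    = "vault/"
}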

The catch is that there’s a bit of a chicken-and-egg problem with using HTTPS: you need to securely get certificates and private keys into the Vault and Consul containers in order for TLS to work properly. But doing that means either:

1) checking the certs into a repo, or
2) passing them into the container via something like a volume or ENV var.

Neither of these is ideal.

I’ll probably end up opting to pass them in via a bootstrap volume of some kind, but I’m still thinking over how best to handle this.
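
If I do go that route, the sketch is roughly: mount the certs read-only from a host volume, then point the Vault listener at the mounted files (paths here are illustrative):

  vault:
    volumes:
      - ./certs:/vault/certs:ro   # bootstrap volume holding the cert and key

…and in the Vault config:

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/certs/vault.crt"
  tls_key_file  = "/vault/certs/vault.key"
}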

Simple NodeJS Web Proxy

I was on a project recently where we were performing a network penetration test against an internal network, remotely. To facilitate this, we shipped a server to the client so they could install it in their datacenter. We then logged in to that host and performed our testing from it, rather than sending consultants onsite. This is super efficient and cost-effective. The host we send out in these cases is referred to as a ‘jump box’, because it offers a jumping-off point into the network.

For whatever reason, this time the jump box did not have the full set of tools installed. In many cases this wouldn’t be a terrible problem; we’d just install the tools we needed and move on. In this case, however, the client was performing egress filtering, meaning we had no way to get out to the internet from our jump box. This led to a dilemma: how could I easily access the remote services (like web sites)? Usually I’d set up tinyproxy or something similar and access it through an SSH tunnel. Unfortunately, I didn’t have tinyproxy or any other web proxy server installed, and had no easy way to get one.

However, I did have Node.js installed on the box, which is quite useful for this type of thing! I did a quick search for Node.js proxies and found some suitable base code in an old blog post. (Remember, I couldn’t get out to the internet from the jump box, so I needed to limit my proxy code to core Node.js modules only; I couldn’t ‘npm install’ anything.)

That post is quite old (6 years!) and some things have changed since it was written. For one thing, the old http.createClient() method has been deprecated in favor of http.request(). The sample code also contained some things I didn’t need, like a blacklist of sites to block, but it had some things I definitely did want, like a whitelist of IPs that are allowed to talk to the proxy. That’s an important consideration for this sort of test: if you are going to open up services on a client network, you need to take steps to minimize the security risks they may cause. Restricting access to this proxy to the localhost of the jump box helps me do that.
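
To show the flavor of it, here’s a minimal sketch of that approach using only the core http module (a sketch of mine; the code in the repo is a bit more involved):

// Minimal forward proxy sketch: core modules only, localhost-only access.
// Handles plain HTTP; HTTPS would need CONNECT support on top of this.
const http = require('http');

const ALLOWED = ['127.0.0.1', '::1', '::ffff:127.0.0.1'];

http.createServer((clientReq, clientRes) => {
  // Whitelist: only the jump box itself may use the proxy
  if (!ALLOWED.includes(clientReq.socket.remoteAddress)) {
    clientRes.writeHead(403);
    return clientRes.end('forbidden');
  }
  // Proxy requests carry an absolute URL in the request line
  const target = new URL(clientReq.url);
  const proxyReq = http.request({
    hostname: target.hostname,
    port: target.port || 80,
    path: target.pathname + target.search,
    method: clientReq.method,
    headers: clientReq.headers,
  }, (proxyRes) => {
    clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(clientRes);
  });
  proxyReq.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end('bad gateway');
  });
  clientReq.pipe(proxyReq);
}).listen(8080, '127.0.0.1', () => console.log('proxy listening on 8080'));

Binding the listener to 127.0.0.1 and whitelisting loopback addresses are belt-and-suspenders for the same goal: nothing on the client network can reach the proxy, only sessions tunneled into the jump box.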

You can check out the final results at my pentools github repo.

Once I had the code in place, I simply opened an SSH tunnel that forwards port 8080 on my laptop to port 8080 on the jump box, like this:

ssh -L 8080:localhost:8080 user@jumpbox

Once I was logged in, I configured Firefox on my laptop to use localhost port 8080 as its web proxy. I could then point the browser at the client’s internal network addresses and browse their websites through the jump box proxy.