Creating a Python Weather App for Terminal

I’m spending waaaay too much time in terminal these days and I’m slowly replacing functions I use on my Google Home Mini with commands on my MacBook, because, as always, why not?

The title says it all. I can now run a weather command in terminal to grab the weather of my home town, including wind speed / direction, temperature and general description (clouds, rain, snow, sun). It’s a pretty crude program, it’ll probably crash if you specify an unknown location, but that’s okay. Don’t specify an unknown location and it’ll be fine!

So where to begin on this one. First of all, I needed to find a free weather API provider. Easy stuff, thanks Google. OpenWeather provide just that. Of course there are limited API requests, but I don’t think I will be exceeding those any time soon. From there on, it’s just as easy as sending a GET request to fetch some JSON and then parsing it to be displayed how I see fit.

The only technicality involved was converting a wind direction from degrees to an actual direction. But a spot of math and a big List later, simple.

So let’s get started. First of all, I had to make an account on https://openweathermap.org which is so self-explanatory I won’t even entertain explaining it. It takes a few hours for your API key to become active though, I kinda forgot until the evening so I don’t know exactly how long it takes.

Once that’s done, take a look at their API documentation to see what kind of requests you can make. I just went with the first one, get current data by city name.

What the documentation doesn’t say for whatever strange reason is that you need to include your API key as a parameter with the name appid.
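For illustration, a complete request URL ends up looking something like this (the city and the key here are just placeholders):

http://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=your_api_key_here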

So let’s get to writing some Python code. I’ll be using the json and requests libraries. json is part of the Python standard library, so there’s nothing to install for that one; requests you may need to grab:

pip install requests

Here’s the script:

#! /usr/bin/python
import json
import requests
import sys

# api-endpoint
URL = "http://api.openweathermap.org/data/2.5/weather"

location = "city_here"
api_key = "api_key_here"
units = "metric"        # metric for celsius, imperial for fahrenheit, no param for kelvin
temp_unit = "celsius"   # descriptor for string

PARAMS = {'q': location, 'appid': api_key, 'units': units}

# using the requests library, we make a get request with the given URL and PARAMS
r = requests.get(url=URL, params=PARAMS)

# the json response is stored here
data = r.json()

direction_list = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                  "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW", "N"]

# is json key present
def ijkp(json, key):
    try:
        buf = json[key]
    except KeyError:
        return False
    return True

# the JSON structure can be found at https://openweathermap.org/current
# set up some vars to contain the data I want to show
if ijkp(data, "main") == False:
    print("Critical data missing. Terminating gracefully")
    sys.exit()
if ijkp(data, "weather") == False:
    print("Critical data missing. Terminating gracefully")
    sys.exit()
if ijkp(data, "wind") == False:
    print("Critical data missing. Terminating gracefully")
    sys.exit()

temp = int(data['main']['temp']) if ijkp(data['main'], "temp") else int(-9999)
feels_like = int(data['main']['feels_like']) if ijkp(data['main'], "feels_like") else None
temp_low = int(data['main']['temp_min']) if ijkp(data['main'], "temp_min") else None
temp_high = int(data['main']['temp_max']) if ijkp(data['main'], "temp_max") else None
humidity = int(data['main']['humidity']) if ijkp(data['main'], "humidity") else None
weather_main = data['weather'][0]['main'] if ijkp(data['weather'][0], "main") else None
weather_desc = data['weather'][0]['description'] if ijkp(data['weather'][0], "description") else None
wind_speed = data['wind']['speed'] if ijkp(data['wind'], "speed") else None
wind_speed_mph = int(wind_speed * 2.237) if wind_speed is not None else None

# calculate wind direction from degrees
dir_index = int(int(data['wind']['deg']) / 22.5) if ijkp(data['wind'], "deg") else None
wind_dir = direction_list[dir_index] if dir_index is not None else None

# %s is used for the values that may be None, so a missing key can't crash the prints
print("The current temperature in %s is %i degrees %s but it feels like %s degrees %s" % (
    location, temp, temp_unit, feels_like, temp_unit))
print("The weather is mostly %s: %s" % (weather_main, weather_desc))
print("There is a %smph wind in a %s direction" % (wind_speed_mph, wind_dir))
print("The current humidity is %s%%" % (humidity))

So the code is a bit rough. It’s a hack job. I might tidy it up in the future to traverse the JSON and only extract values if they are there and only display them if they are there, but for now, this does what I want. It crashed when I ran it a minute ago because data['wind']['deg'] was missing, so I just threw together a quick, hacky solution to avoid that for now.

def ijkp(json, key):
    try:
        buf = json[key]
    except KeyError:
        return False
    return True

This function’s sole purpose is to return false if a key doesn’t exist, and true otherwise. I use it in every variable declaration that relies on fetching information from the JSON, to ensure the program doesn’t crash. There are also several if-statements to ensure the core data is present in the JSON data and the program will terminate if it is missing.


In order to calculate the wind direction from meteorological degrees, I take a List containing all the possible directions:

direction_list = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE", "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW", "N"]

North is at both the start and end because both 0 and 360 degrees can represent north, so there are 16 unique elements in this List.

360 / 16 = 22.5, i.e. 360 degrees divided by 16 unique directions gives 22.5 degrees per direction.

So I fetch the wind direction in degrees and then divide that by 22.5. The degrees value is explicitly converted to an int to ensure it is a whole number, and so is the result of the division, to force-round it into a whole number that can be used as an index into the list. Let’s assume the degrees value is 180, a south direction.

180 / 22.5 = 8, and thus we fetch the value at index 8 of the list (counting from zero), which is “S”.
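Pulled out on its own, the whole conversion is a one-line lookup. Here’s a minimal sketch, reusing the direction_list from above:

direction_list = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                  "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW", "N"]

def wind_direction(degrees):
    # truncating division buckets each 22.5-degree slice into a compass direction
    return direction_list[int(degrees / 22.5)]

print(wind_direction(180))  # S
print(wind_direction(360))  # N, via the duplicate entry at the end of the list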

Finally, as always, I move this script to my /scripts/ directory, remove its extension and make sure it is permitted to execute, so I can run it from anywhere on my system, within terminal.
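Concretely, that amounts to something like the following, assuming the script was saved as weather.py and /scripts/ is already on your PATH:

chmod +x weather.py
mv weather.py /scripts/weather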

Although I simply threw this code together quickly, the functionality I sought is there, and the real reason behind it is because I’ve been neglecting Python for too long. All the years I’ve been writing code, I’ve barely touched this language. Because I started with PHP and then C++, I am very comfortable with a C-style syntax, so when indentation and colons become the norm, it’s very foreign to me. So, the more I write, the more comfortable I get. And that, ladies and gentlemen, is why this script now exists on my laptop!

A simple nodeJS REST API

The Background

For one of my home projects, I require an API service for my Angular SPA to interact with. I could’ve opted for C++ or Java but the turnaround on writing code in nodeJS is so much quicker, not to mention its efficiency.

The project I’m working on is a project task manager. There are dozens out there already but I need to brush up on my Angular and nodeJS knowledge anyway and figured this would be a good opportunity to do that. Plus I only want it locally, on my local network server, along with the SPA. Then my finished project will provide me with exactly what I need. Nothing less, nothing more.

I wanted to be able to store a bunch of data and access it when and where I need to. So my first port of call was to figure out what kind of data I wanted to store.

Well, it’s a project task manager, with emphasis on estimated task durations, deadlines and order of importance and with that in mind, here’s what I decided I need:

  • Project name
  • Project deadline
  • Project description
  • Project color*
  • Task name
  • Task deadline
  • Task description
  • Task estimated duration
  • Task color*
  • Task priority, normal | low | high

*The tasks and projects will have an assigned color because I intend on showing them in a bar graph, one of those horizontal ones with the cute colors.

Next, I wanted to use that information to design a database to store all of the values in. I will call the database TaskMan, because, why not. I could’ve drawn up an entity-relationship diagram but truth be told, when it’s a project for myself, I just open up a text editor, figure out what I need and then go for a loose design from there. It’ll only be myself using it so I’m not too bothered about having the perfect database, I’m not a database administrator, after all. I do, however, write all of my database creation code in a text file and save it as .sql just because thats wise. I used to use phpMyAdmin for managing MySQL but times have changed, my friends, times have changed.


RESTful

I won’t go into a great deal of detail with this because there’s a wealth of information about it online already. But REST refers to REpresentational State Transfer. As in, the state is transferred with each request, as opposed to being stored by the server. So the server remains stateless, it does not keep sessions for each request etc.

There are four main operations involved in using a RESTful API:

  • Create – POST
  • Read – GET
  • Update – PUT
  • Delete – DELETE

Commonly referred to as CRUD, these refer to the types of HTTP request that one can make. I make use of these here.
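As a preview, here’s how those verbs end up mapping onto one of the endpoints built later in this post:

POST   /projects             create a new project
GET    /projects             fetch all projects
GET    /projects/:projectId  fetch a single project
PUT    /projects/:projectId  update a project
DELETE /projects/:projectId  delete a project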

The Database

Alright, I’m just going to dump some tables here which detail the database tables I decided to create, and then I’ll include the creation SQL for them also.


This is the Projects table:

Field Name   | Field Type        | Length | Description
-------------|-------------------|--------|-------------------------------------
id           | INT               |        | NOT NULL AUTO_INCREMENT PRIMARY KEY
name         | VARCHAR           | 56     | NOT NULL UNIQUE
description  | TEXT              |        |
start_date   | DATETIME          |        |
deadline     | DATETIME          |        |
completed    | BOOLEAN / TINYINT |        |
color        | VARCHAR           | 7      |
priority_id  | INT               | 3      | FOREIGN KEY


And this is the almost identical Tasks table:

Field Name         | Field Type        | Length | Description
-------------------|-------------------|--------|-------------------------------------
id                 | INT               |        | NOT NULL AUTO_INCREMENT PRIMARY KEY
project_id         | INT               |        | NOT NULL FOREIGN KEY
name               | VARCHAR           | 56     | NOT NULL UNIQUE
description        | TEXT              |        |
start_date         | DATETIME          |        |
deadline           | DATETIME          |        |
estimated_duration | INT               | 3      |
completed          | BOOLEAN / TINYINT |        |
priority_id        | INT               | 3      | FOREIGN KEY

Finally, this table is the Priority table:

Field Name | Field Type | Length | Description
-----------|------------|--------|-------------------------------------
id         | INT        |        | NOT NULL AUTO_INCREMENT PRIMARY KEY
name       | VARCHAR    | 20     |
color      | VARCHAR    | 7      |


I could definitely get some normalisation in the works here but like I said, I’m not a DBA and it’s for a home project only, so, I’m happy with that. I decided to add a color column to the Priority table too but I’m not sure if I will use it or not, yet.


Following is the MySQL code to create the above tables and database:

CREATE DATABASE TaskMan;

CREATE TABLE priority(
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(20) NOT NULL,
    color VARCHAR(7),
    CONSTRAINT priority_pk PRIMARY KEY(id)
);

CREATE TABLE projects(
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(56) NOT NULL UNIQUE,
    description TEXT,
    start_date DATETIME,
    deadline DATETIME,
    completed BOOLEAN,
    color VARCHAR(7),
    priority_id INT(3),
    CONSTRAINT projects_pk PRIMARY KEY (id),
    FOREIGN KEY (priority_id) REFERENCES priority(id) ON DELETE SET NULL
);

CREATE TABLE tasks(
    id INT NOT NULL AUTO_INCREMENT,
    project_id INT NOT NULL,
    name VARCHAR(56) NOT NULL,
    description TEXT,
    start_date DATETIME,
    deadline DATETIME,
    estimated_duration INT(3),
    completed BOOLEAN,
    priority_id INT(3),
    CONSTRAINT tasks_pk PRIMARY KEY(id),
    FOREIGN KEY(priority_id) REFERENCES priority(id) ON DELETE SET NULL,
    FOREIGN KEY(project_id) REFERENCES projects(id) ON DELETE CASCADE
);

Now, you can either log in to MySQL via terminal or, if you have phpMyAdmin set up, you can log in there and paste it straight into the SQL code executor that they have. I use MySQL via the terminal, although I definitely use phpMyAdmin for quickly inserting and examining data. The order of execution is important because of the FOREIGN KEY references.
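If you go the terminal route, you can also feed the whole file to MySQL in one go, assuming you saved it as something like taskman.sql:

mysql -u root -p < taskman.sql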

As a side note, ON DELETE SET NULL means set the tasks.priority_id to NULL if a priority that is referenced is deleted. ON DELETE CASCADE means delete this record if the projects.id reference is deleted. It just keeps the database tidy.
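To sketch what that means in practice, assuming a priority row with id 2 and a project with id 1 exist and are referenced:

DELETE FROM priority WHERE id = 2;  -- referencing rows get priority_id set to NULL
DELETE FROM projects WHERE id = 1;  -- every task with project_id = 1 is deleted with it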


The Project

That’s the database set up, at this point I began my nodeJS project in WebStorm. I just titled it TaskMan and created an empty project. From there, I opened the terminal within the WebStorm IDE and ran a couple of commands. The first being npm init. I keep all values default except the entry point, I change that to app.js instead because I have a whole lot of index.js files in my project, so I prefer the entry point to be descriptive and different.

Next I ran some more npm commands to install some node modules to the project:

npm install express
npm install mysql
npm install moment
npm install morgan -D
npm install http-errors

Express will let me get a server up and running really easily, mysql is, well, for MySQL database interaction and moment is a fantastic library which, in their own words, allows you to “Parse, validate, manipulate, and display dates and times in JavaScript”. Morgan is an HTTP request logger I am using for development; the -D parameter is the same as --save-dev, just shorter, so the library won’t be included among the production dependencies. Finally, http-errors just makes life easier when dealing with HTTP errors, I use it to create a 404 Not Found error.

Alright, with all those dependencies installed, time to churn out some code. I created an app.js file in the root directory of my project. In this file, I included morgan, express and http-errors. At first, I just set up express to listen on port 3000 and return a typical “Hello, world” to ensure everything is working so far:

let createError = require('http-errors');
let express = require('express');
let logger = require('morgan');

let app = express();

app.use(function(req, res, next) {
    res.send('Hello, world!');
});

app.listen(3000, "0.0.0.0");

Running this code, I directed my browser to http://localhost:3000 and saw “Hello, world!” as expected. So now express is up and running, my journey continued!

I wanted to ensure I could handle urlencoded and json data, and as such I implemented the built-in middleware provided by the Express library to do so. The following middleware are based on the body-parser library. I also implemented the morgan logger and http-errors at this point.

So below let app = express();, I added some more app.use() calls to do this. See the full code below; the new additions are the three middleware calls and the two error handlers:

let createError = require('http-errors');
let express = require('express');
let logger = require('morgan');

let app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    next(createError(404, 'rip'));
});

// error handler
app.use(function(err, req, res, next) {
    // set locals, only providing error in development
    res.locals.message = err.message;
    res.locals.error = req.app.get('env') === 'development' ? err : {};

    // render the error page
    res.status(err.status || 500).send(err.message);
});

app.listen(3000, "0.0.0.0");

The http-errors library in action:
app.use(function(req, res, next) {
    next(createError(404, 'rip'));
});

This piece of code takes all incoming requests that reach it, creates a 404 error with the help of the http-errors library, and passes that to the next() function, which passes the request down the line to the next handler. The next handler down the line happens to be the final one, which sets some locals values and then returns the error code and message.


app.use(function(err, req, res, next) {
    // set locals, only providing error in development
    res.locals.message = err.message;
    res.locals.error = req.app.get('env') === 'development' ? err : {};

    // render the error page
    res.status(err.status || 500).send(err.message);
});

At this point, all I will ever get from this code, no matter what calls I make to what endpoints, is 404 errors. This is because of the call to createError(404, 'rip'). It’s the first request handler that is encountered and, with that said, order is important here. Even if I fully implemented my whole API by now, if that block of code remains first in order, nothing but 404s will ever be returned. That’s not entirely true, you could throw a different error with a different handler further down the line, but order is important. Requests take a path through your code and you should ensure that it’s the right path.

So the final piece of code to add to app.js is something to handle my requests. I created a new directory in the root of my project called controllers, and in this directory I created an index.js file to handle some controlling. It doesn’t do anything just yet but let’s implement that in app.js for now so we can close off that file and not return to it for a while.

let createError = require('http-errors');
let express = require('express');
let logger = require('morgan');
let controllers = require('./controllers');

let app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(controllers);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    next(createError(404, 'rip'));
});

// error handler
app.use(function(err, req, res, next) {
    // set locals, only providing error in development
    res.locals.message = err.message;
    res.locals.error = req.app.get('env') === 'development' ? err : {};

    // render the error page
    res.status(err.status || 500).send(err.message);
});

app.listen(3000, "0.0.0.0");

Here, I created a controllers constant and told the express app to use whatever code is in that directory to process incoming requests. If that code cannot process requests, an error is thrown. I am yet to update the rest of the code to use http-errors, though, so instead, it simply sets an appropriate status and returns an appropriate message, as opposed to creating a new http error with the library and passing it along the handler chain.

Controllers

The controllers directory provides a controller for each endpoint in the API. So if I make a call to http://domain:3000/octopus, I will have a controller called octopus_controller.js to handle that request.

I had two endpoints in mind, one for the projects and one for the tasks, and so I created a controller for each of those endpoints. Now let’s take a look at index.js inside the controllers directory:

let express = require('express'),
    router = express.Router(),
    projectsController = require('./projects_controller'),
    tasksController = require('./tasks_controller');

router.use('/projects', projectsController);
router.use('/tasks', tasksController);

router.get('/', function (req, res) {
    res.status(403).send("403 Access Forbidden");
});

module.exports = router;

The code here is relatively simple. I define a handle to the express module so I can access the router component, I define my controllers and then I tell the router to use the appropriate controller for the relative endpoints. I also add some code to return a 403 Access Forbidden error in the event that the directory is accessed directly. module.exports = router; is then called to return the code in this file as an object to app.js when it requires the controller directory.

I won’t go through both controllers because they are somewhat similar, so let’s just take a look at projects_controller.js:

let express = require('express'),
    router = express.Router(),
    projectsModel = require('../models/projects_model'),
    general = require('../helpers/general');

router.post('/', createNewProject);
router.get('/', getAllProjects);
router.get('/:projectId', getProjectById);
router.delete('/:projectId', deleteProject);
router.delete('/', deleteProject);
router.put('/:projectId', updateProject);

function getProjectById(req, res) {
    let required = ['projectId'],
        params = req.params;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(400).send("Missing Parameter");
    } else {
        projectsModel.getProjectById(params)
            .then(data => {
                if (data.toString() !== '')
                    res.status(200).send({data: data});
                else
                    res.status(404).send('404 Not Found');
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err);
                });
    }
}

function getAllProjects(req, res) {
    let required = [],
        params = req.params;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(400).send("Missing Parameter");
    } else {
        projectsModel.getAllProjects(params)
            .then(data => {
                if (data !== null)
                    res.status(200).send({data: data});
                else
                    res.status(404).send('404 Not Found');
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err);
                });
    }
}

function createNewProject(req, res) {
    let required = ['name', 'description', 'start_date', 'deadline', 'color', 'priority_id'],
        params = req.body;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(400).send({data: "Missing Parameter"});
    } else {
        projectsModel.newProject(params)
            .then(data => {
                if (data !== null && data.affectedRows > 0) {
                    res.setHeader('Location', '/projects/' + data.insertId);
                    res.status(201).send(null);
                } else {
                    res.status(200).send({data: 'unable to add record'});
                }
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err.toString());
                });
    }
}

function updateProject(req, res) {
    let required = ['name', 'description', 'start_date', 'deadline', 'color', 'priority_id', 'project_id'],
        params = req.body;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(400).send({data: "Missing Parameter"});
    } else {
        projectsModel.updateProject(params)
            .then(data => {
                if (data !== null && data.affectedRows > 0) {
                    res.setHeader('Location', '/projects/' + data.insertId);
                    res.status(201).send(null);
                } else {
                    res.status(200).send({data: 'unable to add record'});
                }
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err);
                });
    }
}

function deleteProject(req, res) {
    let required = ['projectId'],
        params = req.params;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(404).send("404 Not Found/Missing Parameter");
    } else {
        projectsModel.deleteProject(params)
            .then(data => {
                if (data !== null && data.affectedRows > 0)
                    res.status(200).send(null);
                else
                    res.status(404).send(null);
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err);
                });
    }
}

module.exports = router;

This follows very much the same format as the controllers index.js. I require express so I can access the router component and then tell it which endpoints to use for which type of request.

router.post('/', createNewProject);
router.get('/', getAllProjects);
router.get('/:projectId', getProjectById);
router.delete('/:projectId', deleteProject);
router.delete('/', deleteProject);
router.put('/:projectId', updateProject);

This tells the router what to do with each endpoint. Notice that a few of the endpoints are just defined as a forward slash, that’s because this code is called within the index.js controller code which already defines the /projects endpoint. So when you see a forward slash as an endpoint here, it actually means /projects/. Each of these router function calls refer to the type of HTTP request it will handle, POST, GET, PUT and DELETE. The endpoint is defined and then a function provided so it knows what to do with that request.

Let’s take a closer look at this line: router.get('/:projectId', getProjectById);. The :projectId means we are expecting a parameter in the URL after the slash, so the endpoint would be, for example, /projects/1 of type GET. The parameter name used here is referenced in the function provided to handle the request.

function getProjectById(req, res) {
    let required = ['projectId'],
        params = req.params;

    if (!general.checkIfObjectContains(params, required)) {
        res.status(400).send("Missing Parameter");
    } else {
        projectsModel.getProjectById(params)
            .then(data => {
                if (data.toString() !== '')
                    res.status(200).send({data: data});
                else
                    res.status(404).send('404 Not Found');
            })
            .catch(
                // Log the rejection reason
                (err) => {
                    console.log(err);
                });
    }
}

So this is how all of the request functions appear. They define two variables, required and params; required contains a list of parameter names that are required, if any, and params contains the request parameters, if any.

A custom-written function contained within a helper module is used to determine whether any of the required parameters are missing from the request and, if so, an HTTP 400 Bad Request error code is returned along with an indication as to why.
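The post doesn’t show helpers/general.js itself, so here’s a minimal sketch of what checkIfObjectContains could look like, purely as an illustration:

// helpers/general.js - a hypothetical sketch, not the original
module.exports = {
    checkIfObjectContains: function (obj, required) {
        // true only if every required key is present on the object
        return required.every(function (key) {
            return obj[key] !== undefined;
        });
    }
};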

If no parameters are missing, the code goes on to call a function of the projectsModel class, which handles interaction with the MySQL database. Using promises, the code will either return a status of 200 with the requested data or a 404 Not Found, once the MySQL interaction is completed. Finally, any errors are caught and printed to the console, for now. Errors will be properly logged in the future. And finally, at the end of the code, module.exports = router; is called.
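For a quick sanity check once the server is running, requests along these lines exercise the controller (assuming port 3000 as configured earlier, with field names matching the required lists above):

curl http://localhost:3000/projects
curl http://localhost:3000/projects/1
curl -X POST http://localhost:3000/projects \
     -H "Content-Type: application/json" \
     -d '{"name": "Demo", "description": "test", "start_date": "2020-01-01 09:00:00", "deadline": "2020-02-01 17:00:00", "color": "#ff0000", "priority_id": 1}'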

Models

Let’s take a look at the models directory now, which contains all the code to interact with MySQL. This directory doesn’t contain an index.js file because it’s not required. It does, however, contain a mysql_model.js file that is used within each of the models, which are, naturally, projects_model.js and tasks_model.js. mysql_model.js is simply a wrapper to make interacting with the mysql library simpler. I picked up the concept of this code from a previous job and have used it in all my nodeJS projects since, so shout out to ‘Ash’ for originally writing it and giving me the inspiration, knowledge and understanding to reproduce it.

let mysql = require('mysql'),
    config = require('../config');

module.exports = function() {
    this.query = function(sql, params) {
        if (!params) { params = []; }
        return new Promise(function(resolve, reject) {
            let con = mysql.createConnection(config.mysql);
            con.connect(function(err) {
                // hand connection errors to the Promise rather than throwing
                if (err) return reject(err);
            });
            con.query(sql, params, function(err, result) {
                // close the connection once the query completes
                con.end();
                if (err) {
                    return reject(err);
                } else {
                    return resolve(result);
                }
            });
        });
    },
    this.Select = function(sql, params) {
        return this.query(sql, params);
    },
    this.Update = function(sql, params) {
        return this.query(sql, params);
    },
    this.Insert = function(sql, params) {
        return this.query(sql, params);
    },
    this.Delete = function(sql, params) {
        return this.query(sql, params);
    }
};

First, the mysql library is included, or, required, as is a config file that defines the database connection information; I’ll show that code in just a minute so you can see how it looks. Each query then opens a connection to the database, and if connecting fails, the error is handed to the Promise’s reject handler. The query itself is wrapped in a Promise to allow easy chaining. Finally, some specific functions are defined which simply provide a user-friendly means of running queries, so instead of calling query(sql, params); I can call Select(sql, params); or Insert(sql, params); for better readability.
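Usage from a model then looks something like this (a sketch mirroring what projects_model.js does below):

let MySql = require('./mysql_model'),
    db = new MySql();

db.Select("SELECT * FROM projects WHERE id = ?", [1])
    .then(rows => console.log(rows))
    .catch(err => console.error(err));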

The config file looks like this, with sensitive values removed:

module.exports = {
    mysql: {
        host: 'host_here',
        user: 'user_here',
        password: 'password_here',
        database: 'TaskMan'
    }
};

So that’s the mysql wrapper detailed, let’s take a look at projects_model.js to see what that model is doing. I’ll focus on the getProjectById(); function, since that’s the one I singled out above.

let mysql = require('./mysql_model'),
    db = new mysql();

module.exports = {
    getProjectById({projectId}) {
        let query = "SELECT * FROM projects WHERE id=?",
            params = [projectId];
        return db.Select(query, params);
    },
    getAllProjects() {
        let query = "SELECT * FROM projects";
        return db.Select(query);
    },
    newProject({name, description, start_date, deadline, color, priority_id}) {
        let query = "INSERT INTO projects (name, description, start_date, deadline, color, priority_id) VALUES (?, ?, ?, ?, ?, ?)",
            params = [name, description, start_date, deadline, color, priority_id];
        return db.Insert(query, params);
    },
    deleteProject({projectId}) {
        let query = "DELETE FROM projects WHERE id=?",
            params = [projectId];
        return db.Delete(query, params);
    },
    updateProject({name, description, start_date, deadline, color, priority_id, project_id}) {
        let query = "UPDATE projects SET name=?, description=?, start_date=?, deadline=?, color=?, priority_id=? WHERE id=?",
            params = [name, description, start_date, deadline, color, priority_id, project_id];
        return db.Update(query, params);
    }
};

So I include the mysql wrapper that I just talked about, and instantiate a new object of that type, called db. Then the rest of the code is exported as a module. It simply declares all the functions I need to interact with the database. Each function takes an object literal as a parameter, contains the required SQL and returns the result of the query that is executed by making a call to the appropriate method on the db object.

getProjectById({projectId}) {
    let query = "SELECT * FROM projects WHERE id=?",
        params = [projectId];
    return db.Select(query, params);
}

So here, the getProjectById({projectId}) function is defined. It then defines the query to run, and the params that are required. Then it returns a call to db.Select(query, params);. It’s that simple.

Summary

Alright, let’s wrap it up. I’ve talked about creating a RESTful API that interacts with a MySQL database, using nodeJS. It follows an MVC architecture, without the V, of course. I’ve discussed creating a database, an express nodeJS server, a mysql wrapper, endpoint controllers and models for those controllers. I’ve shown you how I create a RESTful API using this technology, and I hope it helps somebody out there that’s interested in doing the same thing.

The project structure I used is:

Project Root
--app.js
--controllers
----index.js
----projects_controller.js
----tasks_controller.js
--models
----mysql_model.js
----projects_model.js
----tasks_model.js
--config
----index.js

A final note, if you ever decide to use this in a production environment, adding authentication middleware is a relatively easy step also. You’d just have to use an existing node library like Passport or Auth0 and implement a registration and authentication endpoint too.

Thanks for stopping by!

Custom Shortcode and Quicktags for blog posts

I use inline code styling for a lot of my posts and I have been asked on several occasions how so. Well, I added some custom shortcode to my wordpress HTML editor. I’m a developer, and I very much dislike the visual editor, and so I use the HTML editor. So, custom shortcode is very convenient for me. Now, I’m going to explain how I achieved this, so y’all can get the same functionality too!

First of all, I created a child theme of the theme I am using. This means getting all up in your website files and making a new directory, so if you’re not comfortable with that, abort now! I’m joking, follow along. Be brave. You’ll never learn anything new if you don’t try.

Alright, child themes. These let you edit your current theme without losing the modifications when your theme is updated. WordPress have a brilliant tutorial on how to do this here, but I’ll explain it briefly anyway.

You’re going to need a new directory, the name being currentTheme-child. So if you’re using Chocolate Cookie then you will have a chocolate-cookie directory. To child this theme, create a chocolate-cookie-child directory.

The path is generally /wp-content/themes/

That’s step one, done!

Now, we gotta create a style.css file, this proclaims its love for the parent theme, so WordPress knows where to find the good stuff. The actual theme. The very first thing in this file should be a block comment to declare several attributes.

/*
Theme Name: Chocolate Cookie Child
Description: Chocolate Cookie Child Theme
Author: Tim Talbot
Author URL: http://timtalbot.co.uk
Template: chocolate-cookie
Version: 1.0.0
*/

The only required attributes here are Theme Name and Template. These tell WordPress the name of your child theme and the template (the parent theme) your child theme should build on.

Alright, so far we have:

  • A new directory called chocolate-cookie-child
  • a style.css file within that directory
  • a block comment at the top of style.css to define some required attributes, Theme Name and Template

All we need to do now is create a functions.php file to enqueue the style within WordPress. This ensures that WordPress loads and presents the style.css style, to implement any of our custom CSS, after it has loaded the parent theme styling.

<?php
add_action( 'wp_enqueue_scripts', 'my_theme_enqueue_styles' );

This is the beginning of our functions.php script. We’re calling add_action() and passing it two parameters: the first is a string naming the action that the second parameter should be hooked to, and the second is the name of a function within our functions.php script. I left mine defined as-is, per the WordPress tutorial.

Next, we’re going to implement that function and so this code follows the add_action() call:

function my_theme_enqueue_styles() {
    $parent_style = 'chocolate-cookie-style';

    wp_enqueue_style( $parent_style, get_template_directory_uri() . '/style.css' );
    wp_enqueue_style( 'child-style',
        get_stylesheet_directory_uri() . '/style.css',
        array( $parent_style ),
        wp_get_theme()->get('Version')
    );
}
?>

Here we provide a $parent_style handle; the easiest way to find this is to look at the source of your blog with your parent theme activated. In my experience, it’s always been theme-name-style, though. So, scroll down your blog source code or CTRL+F to search for .css, you’re looking for a CSS include that looks like this:

<link rel='stylesheet' id='chocolate-cookie-style-css' href='http://timtalbot.co.uk/wp-content/themes/chocolate-cookie/style.css?ver=5.3.2' type='text/css' media='all' />

Here within the ID attribute, we can see chocolate-cookie-style-css, and so we remove the -style suffix to be left with the handle to our parent style, chocolate-cookie-style.

We then feed this $parent_style handle into wp_enqueue_style() to enqueue the parent style. We then call it again to enqueue our child style. Finally, we close up our PHP script with ?> and we’re good to go. Now we can equip our Chocolate Cookie Child theme within our WordPress dashboard, unless something went wrong, somewhere.

Well, that’s our child theme created! Now we can move on to what we came here for, the shortcode implementation!

First of all, I added some custom CSS to the child style.css file, this dictates how my in-line code will be presented:

mycode {
    /*border-radius: 5px;
    -moz-border-radius: 5px;
    -webkit-border-radius: 5px;*/
    border: 1px dashed #969696;
    background: #f5f5f5;
    color: #BF4D28;
    padding-left: 5px;
    padding-right: 5px;
    font-family: Monaco, Consolas, "Andale Mono", "DejaVu Sans Mono", monospace;
    white-space: nowrap;
}

Using a custom tag, mycode, allows me to just wrap text in that tag to apply the style. I don’t want to type all of those tags all of the time though, so let’s add a shortcode button above the text editor. For that, back to functions.php.

First of all, we call add_action('admin_print_footer_scripts', 'add_my_code_tag');. We put this right after the previous call to add_action(), the one that enqueued our style. It follows the same format: the first parameter is the hook to which the function identified by the second parameter should be attached. We want to inject our custom code on admin pages, and so we use the admin_print_footer_scripts hook. Our function, add_my_code_tag, comes after the my_theme_enqueue_styles() function in the functions.php file.

Here is that function:

function add_my_code_tag() {
    if(wp_script_is("quicktags")) {
?>
<script type="text/javascript">
    //this function is used to retrieve the selected text from the text editor
    function getSel() {
        var txtarea = document.getElementById("content");
        var start = txtarea.selectionStart;
        var finish = txtarea.selectionEnd;
        return txtarea.value.substring(start, finish);
    }

    QTags.addButton( "code_shortcode", "Inline Code", callback );

    function callback() {
        var selected_text = getSel();
        QTags.insertContent("<mycode>" + selected_text + "</mycode>");
    }
</script>
<?php
    }
}

Sidenote: There’s nothing worse than badly formatted code, and while writing this post I just noticed that the <code> tag didn’t retain code formatting, so I just added code {white-space: pre; line-height: 1em;} to my child theme style.css to fix that. I also spent half an hour adjusting the <mycode> CSS to tweak the style to match the theme of the blog, since it looked very cool for me until I reloaded the CSS and it picked up some styling that it hadn’t previously, oops!

Back to the matter at hand!

The very first thing we do is check if the quicktags WordPress script is being loaded, since we don’t want to try and access that if it hasn’t loaded yet. If the condition is true, we write a function in JavaScript to grab the selected text. We use this in the next function we write, callback(), which is used when we create our new Quicktags button.

To create a new Quicktags button, we call QTags.addButton(), part of the WordPress Quicktags API, which requires at least three arguments: an identifier, a display name for the button, and a callback or opening tag. Here, I use a callback… called… callback, wow. RIP naming conventions.

Finally, the callback() function which executes when the new Quicktags button is clicked declares a variable which will contain the return value of the first function, getSel(), which is any currently selected text. It’ll then use QTags.insertContent() to insert content at the cursor location. In a typical use-case, this will be at the location of the selected text. The inserted data is the selected text surrounded by the <mycode> opening and closing tags. If no text is selected, it’ll just insert the tags. That’s it, we’re done! If you’ve followed along correctly, we should be good to go!

I suspect we can add some elegance to this, though, so that if no text is selected, it’ll just insert the opening or closing tag. So let’s modify the add_my_code_tag() function to do that. I’m going to add a boolean variable to keep track of whether we want an opening or closing tag, and I am going to add some conditional statements to determine which course of action we seek. The possibilities are:

  • Insert an opening <mycode> tag
  • Insert a closing </mycode> tag
  • Wrap selected text in <mycode> tags

I may be over-complicating things here, but here’s where I’m at:

function add_my_code_tag() {
    if(wp_script_is("quicktags")) {
?>
<script type="text/javascript">
    var close = false;

    //this function is used to retrieve the selected text from the text editor
    function getSel() {
        var txtarea = document.getElementById("content");
        var start = txtarea.selectionStart;
        var finish = txtarea.selectionEnd;
        return txtarea.value.substring(start, finish);
    }

    QTags.addButton( "code_shortcode", "Inline Code", callback );

    function callback() {
        var selected_text = getSel();
        if(selected_text == '') {
            if(!close) {
                QTags.insertContent("<mycode>");
                close = true;
            } else {
                close = false;
                QTags.insertContent("</mycode>");
            }
        } else {
            QTags.insertContent("<mycode>" + selected_text + "</mycode>");
        }
    }
</script>
<?php
    }
}

and now, with all of that over-complicated, unnecessary, extra-work nonsense out of the way, we can reduce the function to the following code for exactly the same functionality:

function add_my_code_tag() {
    if(wp_script_is("quicktags")) {
?>
<script type="text/javascript">
    QTags.addButton( "code_shortcode", "Inline Code", "<mycode>", "</mycode>" );
</script>
<?php
    }
}

Because that very same functionality is provided to us by WordPress’s Quicktags API. And there you have it, that’s how to add a Quicktags shortcode button to your WordPress HTML editor, to make life simpler!

Sidenote: just another side note, in case you’re wondering how I type all of these < and > opening and closing tags without them disappearing, I’m using HTML Special Entities. Without spaces: & lt; and & gt; for opening and closing, respectively.

TP-Link Home Control With NodeJS

So, if you can’t tell from my recent posts, I am currently obsessed with remote control. Powering things on and off. I work on my laptop a lot, and typing is a lot quicker for me than moving the mouse around, so I spend a lot of time with terminal open just for the sake of it, even when it’s not in use.

Anyway, I have a Google Home Mini device in my bedroom, along with a tp-link LB130 smart bulb, which Google can turn on and off at my vocal command. It’s pretty nice, but I wanted to be able to type it instead. Enter nodejs.

Why node? Well, the first library I saw when I googled tp-link smart bulb api was a nodejs library. I’m familiar with node, I have it installed and it’s ready to go. So I installed the tplink-smarthome-api from npmjs. That said, there is also a python library available, but I saw it after the fact.

npm install -g tplink-smarthome-api installs the library globally, you can opt to install it locally instead if you prefer.

Then I set up a very crude nodeJS script to test the api. I needed the IP address of my light bulb, first, though. So I logged in to my router settings, looked at the wifi devices and looked for the appropriate one. While I was there I set up some DHCP address reservation, so my light bulb will always have the same local IP address.

let { Client } = require('tplink-smarthome-api');
let client = new Client();
let plug = client.getDevice({host: 'ip.goes.in.here'}).then((device) => {
    device.getSysInfo().then(console.log);
    device.setPowerState(true);
});

This code snippet, with the right IP address, simply turns the light on. You can change the boolean on line 5, device.setPowerState(true);, to false in order to turn off the light, to test. Line 4 prints a bunch of device info; I commented this line out in the next iteration but left it in the script in case I ever wanted to refer to it again.

Now that I could see that the tplink api library works as expected, I wanted to add this node script to my global /scripts/ path, so I can run it from any location on my laptop, within terminal. I also wanted to be able to pass a parameter to determine whether to turn the light on or off. So let’s break that down into two tasks, I’ll start with the command line arguments to set the state of the bulb.

let { Client } = require('tplink-smarthome-api');

if(process.argv.length == 3) {
    if(process.argv[2].toLowerCase() == 'on' || process.argv[2].toLowerCase() == 'off') {
        let state = (process.argv[2].toLowerCase() == 'on') ? true : false;
        let client = new Client();
        let plug = client.getDevice({host: 'ip.goes.in.here'}).then((device) => {
            //device.getSysInfo().then(console.log);
            device.setPowerState(state);
        });
    } else {
        console.log("on or off, nothing else");
    }
} else {
    console.log("arg length mismatch");
}

process.argv[] is an array that contains a list of command line arguments. As with most indexed arrays, it starts at 0 for the first element. The first two arguments are always the node executable path and the script path, so there will always be two arguments even if we don’t provide any ourselves. With that in mind, we get the length of process.argv and compare it to a hard-coded number, 3. The two guaranteed arguments and our own, three.
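So, for example, running the finished script as light on yields an argv roughly like this (the exact paths will differ per machine):

process.argv[0]  /usr/local/bin/node   (the node executable)
process.argv[1]  /scripts/light        (the script being run)
process.argv[2]  on                    (our own argument)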

if(process.argv.length == 3) checks exactly that: if it’s true, we’re going to evaluate the third command line argument, the one we provided. We are expecting 'on' or 'off', and if we have either of these arguments, we’ll set a state variable based on the value, run our code to manipulate the light and pass our state variable to the setPowerState() function.

For convenience and to save writing another if-else block, I used the ternary operator for that on-the-fly conditional assignment.

let state = (process.argv[2].toLowerCase() == 'on') ? true : false;

If the argument value is ‘on’ then we set the state to true, otherwise we set it to false; since we already know its value will be ‘on’ or ‘off’ due to the previous if condition. Then we use the code provided in the example from the library documentation to toggle our light. Let’s take a quick look at that code for a second.

let client = new Client(); creates a new Client() object and assigns it to a variable, `client`.

let plug = client.getDevice({host: 'ip.goes.in.here'}) assigns a variable that is never actually used, but the call itself uses the client object we created above to get our device with the `getDevice()` function. getDevice() returns a Promise, so we chain .then() onto it to run some code once we have got a handle to our device. Within this in-line function, we take our handle to the device and then set its state with the variable we previously created from our command-line argument: device.setPowerState(state);.

So that’s the code. The summary:

  • Check command-line argument count
  • If it equals 3, check that the 3rd argument is equal to either ‘on’ or ‘off’
  • Create a true or false boolean variable based on whether we pass ‘on’ or ‘off’
  • Instantiate a Client() object and get a handle to our device
  • Set the power state of the device

For running this from terminal, anywhere, as with my other scripts, it lives in a /scripts/ directory which is added to the PATH env var. I had to give the script the permission to execute; on mac I type chmod +x light.js. I then remove the extension because I don’t want to type that every time, too. So the js file is now just called light. Make sure you don’t name your files in a way that might clash with other programs either now or in the future.

Finally, I had to add a shebang (#!) to the first line of the file, to tell the system to use nodeJS as the interpreter. This looks like: #! /usr/local/bin/node.

And so, with all of that done, I can open terminal and run light on or light off and it will do exactly as expected, turn that light on or off!

Remote shutdown with C++ on Linux

So, carrying on from my Setting up Wake-on-lan on Ubuntu Server 18.04 LTS article, I decided that I would probably like to shut down my server a little bit more easily too.

In case you’re not sure what server I’m referring to – I run a local headless ubuntu server at home to allow me to work on database driven projects across multiple devices without having to keep local copies of the data on each device. It has your standard LAMP stack on there, nothing too fancy.

Until recently, I was logging in via SSH and running sudo poweroff to shut down the system, not terribly convenient. I figured, if I’m running a python script from terminal to fire up my server, I should probably get another system implemented to run another thing from terminal to shut it down, and so I did.

It’s been quite a while since I’ve written anything meaningful in C++. In fact, the last piece of C++ code I wrote with a purpose was in 2012 when my laptop J key was broken: it kept sending keystrokes without being pushed. So I threw together a low level keyboard hook to discard the key and listen for the period key instead, which it would then replace with a J. Not very convenient but it was a good temporary solution at the time.

Anyway, I digress. So, I could probably have done this in Python but I’m not a Python programmer, I’ve barely even studied it. Shocking, right? I can read it, sure, but not write. So, C++ was a nice alternative since I didn’t want to go through the hassle of installing java on my server for the sake of one task, all that extra JVM overhead was burning my soul. That and my linux distro has C/C++ compilers installed already so no extra work needed there.


The Client-Server Model

Everyone with even a slight understanding of the internet is familiar with the basics of a client-server model. The server listens, the client hollers and the server responds. Without getting into the details of protocols, packets and such, that’s the crux of it. With that in mind, it gave me a good place to start.

I was going to need a server, on my server, to listen. That sounds crazy, right? Ambiguity at its finest!

What I mean is, I needed a program on my server machine to listen on a particular port for a particular piece of data that triggers a shutdown. I decided to call it the shutdown server, since that would be its sole purpose and the name describes it perfectly.

The Server

So I’m just going to dump the code here and let you have a read before I explain anything about it:

#include <unistd.h>
#include <iostream>
#include <sys/socket.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <string.h>
#include <linux/reboot.h>
#include <sys/reboot.h>

int main(int argc, char const* argv[]) {
    const int port = 8000;
    bool shutdown = false;
    struct sockaddr_in address;
    int new_socket, input;
    int opt = 1;
    int addrlen = sizeof(address);
    const char *response = "shutdown command received";

    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if(sockfd < 0) {
        std::cout << "Socket Creation Failed" << std::endl;
        exit(-1);
    }

    if(setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt)) < 0) {
        std::cout << "setsockopt failed";
        exit(-1);
    }

    address.sin_family = AF_INET;
    address.sin_addr.s_addr = INADDR_ANY;
    address.sin_port = htons(port);

    if(bind(sockfd, (struct sockaddr *) &address, sizeof(address)) < 0) {
        std::cout << "Bind failed" << std::endl;
        exit(-1);
    }

    if(listen(sockfd, 3) < 0) {
        std::cout << "Listen failed" << std::endl;
        exit(-1);
    }

    while(1) {
        char buffer[1024] = {0};
        if((new_socket = accept(sockfd, (struct sockaddr*) &address, (socklen_t*) &addrlen)) < 0) {
            std::cout << "accept failed" << std::endl;
            exit(-1);
        }

        input = read(new_socket, buffer, 1024);
        if(input <= 0) { break; }

        if(strcmp(buffer, "shutdown -local") == 0) {
            send(new_socket, response, strlen(response), 0);
            shutdown = true;
            close(new_socket);
            break;
        } else {
            send(new_socket, "hello", strlen("hello"), 0);
            close(new_socket);
        }
    }

    if(shutdown) {
        std::cout << "Remote shutdown requested" << std::endl;
        reboot(LINUX_REBOOT_CMD_POWER_OFF);
    }

    return 0;
}

First of all, we need to create a socket file descriptor to use:

int sockfd = socket(AF_INET, SOCK_STREAM, 0);

If sockfd contains a negative integer, something went wrong, so we check that with an if statement before going any further.

Sidenote: This is a very crude program so it’ll simply quit if an error is encountered

The next section of code relating to setsockopt() doesn’t strictly have to be used; it lets the program reuse the defined address and port. Since my machine is absolutely not running anything on port 8000, I’m happy to make use of the function.

Following, we populate the sockaddr_in struct with some required values (tell it to use IPv4, accept connections from any address and provide the port to listen on).

Sidenote: I’m not worried about accepting connections from any address because my server is configured to listen on LAN only and it sits behind two firewalled routers and runs its own firewall too

The next step is to attempt to bind the socket file descriptor to the given port with the pre-configured struct data, using bind()

bind(sockfd, (struct sockaddr *) &address, sizeof(address))

and exit if it fails. If it’s successful, however, we will attempt to listen(sockfd, 3) for new connections and enter an infinite loop.

The infinite loop is required to continually accept incoming connections and respond accordingly. There are only two responses here. The first will shut down the system if the data “shutdown -local” is received, the other will simply reply with “hello” and then close the connection and await a new one.

If the accept() function were called before the loop, it would only ever accept one connection and never wait for subsequent connections.

The program is running on linux, so glibc is used to access reboot(LINUX_REBOOT_CMD_POWER_OFF); which will tell the system to power off. This actually requires sufficient permissions to work but I added the compiled program to systemd to run as a service, since I’ll want the shutdown server to be listening automatically as soon as the server is powered on. I’ll explain this a bit more shortly. The service runs with sufficient permissions; otherwise, if you’re testing the code, you’ll probably need to run it as root.

g++ was used to compile.
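Something along the lines of the following, assuming the source file is named server.cpp:

g++ server.cpp -o sdserver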

The Client

The client program will primarily be running from my MacBook so it was compiled on there with g++. It is considerably shorter:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <iostream>

int main(int argc, char const *argv[]) {
    const int port = 8000;
    int sock = 0, response;
    struct sockaddr_in serv_addr;
    const char *data = "shutdown -local";
    char buffer[1024] = {0};

    if((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        std::cout << "socket creation error" << std::endl;
        return -1;
    }

    memset(&serv_addr, 0, sizeof(serv_addr));  // zero the struct (0, not the character '0')
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(port);

    if(inet_pton(AF_INET, "local.ip.here", &serv_addr.sin_addr) <= 0) {
        std::cout << "invalid address" << std::endl;
        return -1;
    }

    if(connect(sock, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
        std::cout << "connection failed" << std::endl;
        return -1;
    }

    send(sock, data, strlen(data), 0);
    response = read(sock, buffer, 1024);
    printf("%s\n", buffer);

    return 0;
}

A quick client summary

Similarly to the server, we define our variables and create a socket. If that’s successful, we validate the given IP address and attempt to connect. Providing there are no errors, we send the data to the server. In this case the data is simply a string containing the phrase “shutdown -local” which is a specific string that the shutdown server is listening out for. The client then waits for a response and then displays it before exiting gracefully by returning zero.

This client source code was compiled on my MacBook and placed in a directory that is added to PATH dedicated to custom scripts and programs that I might like to run from terminal, so I don’t have to navigate anywhere or type full path names when I open terminal.

Compiled with: g++ client.cpp -o sds

and placed in the above mentioned directory, I can simply open terminal on my MacBook and type “sds” to shut down my server. This coupled with the wol.py script to wake the server means that I can turn it on and off with absolute ease, remotely.

Creating a Service to automatically run the shutdown server on start-up

In the same way as detailed in Setting up Wake-on-lan on Ubuntu Server 18.04 LTS, I created a service to run my shutdown server automatically too.

The service is incredibly basic. I created a file called sdserv.service in /etc/systemd/system which contains the following:

[Unit]
Description=Listen for local shutdown command

[Service]
ExecStart=/home/tim/cpp/sdserver

[Install]
WantedBy=multi-user.target


The file located at /home/tim/cpp/sdserver is the binary of the compiled server source code above. ExecStart requires an absolute file path.

I then told systemd to refresh its cache of services with systemctl daemon-reload

systemctl enable sdserv.service tells systemd to run the service on start-up and systemctl start sdserv.service starts the service.

And that’s it! After putting all that together, I can now run wol.py to turn on my server and sds to turn it off!

MySql Access Denied for User root without sudo

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) is not something you want to see when you try and login to mysql using the default root account with no password, especially when there is no password, on a fresh install!

So this was the problem I faced when I installed MySql on Ubuntu Server 18.04 LTS.

I SSH into my server since it’s a headless machine, just sat there with no peripherals connected at all. Just an ethernet cable and power cable. I was beginning to set up the machine as a local network server for a database heavy project I’m working on, so I installed MySql with

sudo apt install mysql-server and immediately tried logging in to create a non-root user for my tasks:

mysql -u root

I typed. It threw me that error above. So I tried it with sudo and sure enough it worked. The reason, as it turns out, is that on Ubuntu 18.04 the MySql root account is set up to authenticate with the auth_socket plugin by default, which checks the connecting system user rather than a password, hence the need for sudo.
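You can see which authentication plugin each account uses from inside a sudo mysql session; this query is standard MySQL, nothing specific to this setup:

SELECT user, host, plugin FROM mysql.user;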

Anyway, to remedy this particular issue I had to run a series of commands that removed the root user and added it again, after logging in with sudo. The steps were as follows, in case you find yourself in a similarly annoying situation:

  1. sudo mysql -u root
  2. drop user 'root'@'localhost'; This one is a bit scary because, well, you’re deleting the root user. But rest assured you can still continue to run commands after this, I promise.
  3. create user 'root'@'localhost' identified by '';
  4. grant all privileges on *.* to 'root'@'localhost' with grant option;
  5. flush privileges;

Now exit MySQL with exit; or CTRL+D and try to log in again without sudo. Job done! (Recreating root with identified by '' swaps its authentication from auth_socket to an ordinary, empty password, which is why the plain login works now.)
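And since the whole point of logging in was to create a non-root user for my project, here’s roughly what that looks like. The username, password and database name below are placeholders, not what I actually used:

create user 'dev'@'%' identified by 'a_strong_password';
grant all privileges on projectdb.* to 'dev'@'%';
flush privileges;

The '%' host part lets the user connect from other machines on the LAN, which is the whole reason this server exists.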

Setting up Wake-on-lan on Ubuntu Server 18.04 LTS

Alright, where to begin?!

When I work from home, which is almost always, I often find myself working on various devices (MacBook, Dell laptop, desktop) and lately I’ve been working on a heavily database oriented project. Local dev with databases involved isn’t very friendly across multiple machines. You can add your code to VCS but including a database full of test data isn’t my idea of a fun time.

So, long story short, I re-purposed my linux desktop PC from its semi-neglected state into something more useful: a LAN MySQL server. Of course, the whole LAMP stack is on there too, just for shigs. That, and ease of data management with PMA (phpMyAdmin).

So what is the problem?

Well, I’m lazy. I’m a developer. I’m not going to walk over to where my server is located and turn it on every time I need to access it, no sir! Nor am I going to leave it on permanently, leeching my hard-earned electricity from under my nose. That, coupled with the fact that it’s next to my bed and, when night sets in, sounds like an engine next to my face. So, I need to be able to turn it on from my laptops. Easy stuff: wake-on-lan at your service.

The BIOS supports it, awesome, why not. So, I set the BIOS to listen for those magic packets and went looking for a tutorial to make Ubuntu cooperate. If you’re unaware, generally speaking, when an operating system shuts down it also powers down the network adapter. This is bad for Wake-on-lan: if the adapter isn’t on, it isn’t listening! The problem is, all the tutorials I found were written for Ubuntu releases before 18, and they lean on init mechanisms (like rc.local and Upstart) that modern Ubuntu no longer uses.

The solution

The steps for preparing Ubuntu for WoL are as follows:

  1. Install ethtool with: sudo apt-get install ethtool
    – Ubuntu Server 18.04 LTS already has this installed
  2. Run ifconfig to get the name of your network interface. It’s worth noting here that all the guides I found say “your network interface is most likely eth0”. The thing is, that’s no longer the case: Ubuntu has been transitioning to systemd since version 15.04, and part of that transition is the implementation of Predictable Network Interface Naming, so you might well find that your interface name is something along the lines of enp0s15.
  3. Run the command sudo ethtool -s interface wol g (substituting interface for the name you found in step 2). This tells the network card to listen out for magic packets; the g denotes wake on magic packet. The problem is, this setting isn’t persistent between shutdowns, so once the machine is powered down and then booted up again, it is lost. The general consensus is that you should create a system service that runs this command at start-up. (You can verify the current setting with ethtool; see the sketch after this list.)
  4. On Ubuntu 18.04, you need to create a systemd service as opposed to enabling, creating and/or modifying rc.local as you would’ve done on previous versions. So, navigate to /etc/systemd/system
    and create a new file here called `wol.service` – you could be more descriptive but I prefer short filenames, I know what wol means here. I use vim for all my terminal based editing so I run this command:

    sudo vim wol.service
    to create and begin editing my service file.
  5. Now you need to populate your wol.service file with all it needs to run as a service. The most comprehensive documentation I found on this was provided by Red Hat here. My file looks like this:
    [Unit]
    Description=Configure Wake-up on LAN
    
    [Service]
    Type=oneshot
    ExecStart=/sbin/ethtool -s enp35s0 wol g
    
    [Install]
    WantedBy=basic.target
    

    ExecStart provides the command to run. It’s important to note that this must be the absolute path to ethtool, because services don’t do relative paths. Check the documentation if you want to understand the file structure more thoroughly.

  6. Once you’ve created your file, you need to add it to the systemd services so you should run systemctl daemon-reload to tell the system to update and/or locate new service files, like the one we just created.
  7. Run systemctl enable wol.service to enable the service to run on start up and,
  8. finally, systemctl start wol.service to fire up the service. This may be a redundant command but I’m not sure if step 7 does this automatically or not so there’s no harm in running it anyway.
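Before trusting the service, it’s worth asking ethtool what the card currently reports. Here’s a quick sanity check, assuming your interface is enp35s0 like mine (output abridged; the flag letters on the “Supports Wake-on” line vary by card):

sudo ethtool enp35s0 | grep Wake-on
    Supports Wake-on: pumbg
    Wake-on: g

If the Wake-on line still says g after a reboot, the service is doing its job.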

And there we have it, if you’ve gone through all of that and enabled Wake-on-lan in BIOS, you should be able to power off your machine and then wake it up with a magic packet.


The Magic Packet

I opted to use a python script to send my magic packet, provided by San Bergmans, thank you! I had to modify it ever so slightly, maybe because I’m using it on a Mac: it expected a MAC address as a parameter, but it always took its own filename as the first parameter, which is obviously not a MAC address, regardless of whether or not I actually provided one. I actually went further than that and just dumped my MAC address in a file, skipped the parameter requirement, and now it just sends a magic packet with my pre-defined MAC address whenever I run the script. I literally type:

wol.py
in my mac terminal and the magic packet is sent, the server fires up and I’m good to go! (I have the script in a directory added to PATH; some advise against such a practice, but convenience is what I thirst for!)
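For reference, the magic packet itself is dead simple: six 0xFF bytes followed by the target MAC address repeated sixteen times, fired at the broadcast address over UDP. This isn’t San Bergmans’ script, just a minimal Python 3 sketch of the same idea with the MAC hard-coded the way I described; the MAC here is a placeholder:

#! /usr/bin/python
import socket

# placeholder MAC address of the machine to wake; use your server's MAC
MAC = "AA:BB:CC:DD:EE:FF"

# a magic packet is six 0xFF bytes followed by the target MAC repeated 16 times
mac_bytes = bytes.fromhex(MAC.replace(":", ""))
packet = b"\xff" * 6 + mac_bytes * 16

# send it as a UDP broadcast; port 9 (the discard port) is the usual choice
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(packet, ("255.255.255.255", 9))
sock.close()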

Xcode, ffmpeg and Mac

So for the last two hours I’ve been getting errors trying to compile some C++ code on my Mac that makes use of ffmpeg. For clarity: I’m using a modern MBP, Xcode and C++ to try and compile some simple ffmpeg code that spits out a list of stuff I can’t even remember the reason for right now.

...Undefined symbols for architecture x86_64: "avcodec_register_all()...

This error was telling me that the linker couldn’t find the definition of a symbol my code referenced. So I tried rebuilding ffmpeg a few times with different options, I tried linking the include and lib directories to Xcode a couple of different ways, I tried changing the code several times too, I even tried compiling via terminal with g++, all with no luck.

So, tired as hell, I gave it one more try. Thanks to some crazy Chinese forum with snippets of English on it, I realised that ffmpeg’s headers are C, not C++, so a C++ compiler will name-mangle the declarations unless they’re wrapped in extern "C". I had to change my header includes to reflect that.

#ifdef __cplusplus
extern "C" {
#include "libavformat/avformat.h"
}
#endif
instead of:

#include <libavformat/avformat.h>

An unbelievably simple fix to a nightmare-ish problem, such is the life of a programmer.
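For what it’s worth, if you end up compiling from terminal like I tried, something along these lines should also link the libraries correctly once the includes are wrapped. This assumes ffmpeg was installed somewhere pkg-config knows about; main.cpp and probe are placeholder names:

g++ main.cpp -o probe $(pkg-config --cflags --libs libavformat libavcodec libavutil)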

So, I don’t write a lot of stuff here anymore but decided to put this up just in case someone else finds themselves in the same situation in the distant future.

Timing Code Execution in Java

Here’s a method for calculating the nth Fibonacci number, written in Java.

static int fibN(int n) {
    if (n <= 1) {
        return n;
    } else {
        return fibN(n - 1) + fibN(n - 2);
    }
}

This method is great for demonstrating recursion, but it’s terribly inefficient for the task at hand: every call spawns two more calls, so the number of calls grows roughly exponentially with n. A more efficient method of computing the nth Fibonacci number is to literally count up from the first number to the nth. To demonstrate this, I’ll be timing the code execution of both the method above, fibN(), and the method below, fibN2().

static int fibN2(int n) {
    int current = 1;
    int previous = 0;
    int next = 1;
    int temp = 0;
    for (int i = 1; i < n; i++) {
        temp = current;
        next = previous + current;
        current = next;
        previous = temp;
    }
    return next;
}
I’ll be using Java’s

System.nanoTime() to time this code. Each method will be called from within a loop 45 times, with nanoTime() noted before the loop starts and when the loop ends in order to calculate execution time. Remember: end time minus start time equals duration. So, here’s the full class for this process:

class Fib {
    static long startTime = 0;
    static long endTime = 0;
    static long duration = 0;
    static long duration2 = 0;

    public static void main(String[] args) {
        // time the recursive version
        System.out.println("Recursive: ");
        startTime = System.nanoTime();
        for (int i = 1; i <= 45; i++)
            System.out.println(fibN(i));
        endTime = System.nanoTime();
        duration = (endTime - startTime);
        System.out.println("execution took " + duration + " nanoseconds");

        // time the iterative version
        System.out.println("Non-Recursive: ");
        startTime = System.nanoTime();
        for (int i = 1; i <= 45; i++)
            System.out.println(fibN2(i));
        endTime = System.nanoTime();
        duration2 = (endTime - startTime);
        System.out.println("execution took " + duration2 + " nanoseconds");

        System.out.println(duration + " vs \n" + duration2);
    }

    static int fibN(int n) {
        if (n <= 1) {
            return n;
        } else {
            return fibN(n - 1) + fibN(n - 2);
        }
    }

    static int fibN2(int n) {
        int current = 1;
        int previous = 0;
        int next = 1;
        int temp = 0;
        for (int i = 1; i < n; i++) {
            temp = current;
            next = previous + current;
            current = next;
            previous = temp;
        }
        return next;
    }
}
(Image: terminal output of the timing run for both methods.)

This image shows the result of this code. As you can see, fibN2() is considerably faster than fibN(), demonstrating that it’s much more efficient to count up to the nth Fibonacci number than it is to use recursion to find it.

A final note: nanoseconds are pretty rubbish to work with, to be honest. At least in this example. So change

System.nanoTime()
to

System.currentTimeMillis()

instead to time the processes in milliseconds. This will give a more friendly result.

Merging Shift and Vernam Cipher

This is the outcome of my attempt at merging the Shift and Vernam ciphers as part of the Computer Security module of the third year of my B.Sc. Computer Science degree. The task itself is designed to stimulate an interest in a wide range of security topics. One of the following tasks was to be chosen:

1. Design and implement a secure password system

2. Analyse an existing password program

3. Improve the security level of an existing password system

The task was primarily aimed at group collaboration, with teams consisting of exactly three people: Alice, Bob and Charlie (A, B and C).

Person A would be responsible for the planning, schedule and preliminary research for the password system.

Person B would be responsible for the implementation of the password system.

Person C would be responsible for the security quality of the password system.

The quotes above are taken directly from the coursework specification, and it appears that the responsibilities of each person don’t make much sense with regard to tasks 2 and 3.


I opted to do this task solo because, well, it was permitted for starters, and although group work is good for building teamwork skills, it’s not really fair for one person’s official grade to ride on somebody else’s work, positive or negative.

So the idea behind the system I attempted to create was to merge the functionality of the Shift and Vernam ciphers; by merging them, I mean performing one and then the other consecutively (a sketch of the basic idea follows below). The data to be encrypted was a plaintext password, as part of a totally fictional user account administration system for a local area network. The report discusses what I was attempting to achieve, when I noticed that things weren’t going according to plan and why, and how I completed the overall system with some last-minute changes in light of the problem discovered.
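The report covers the details, but for a flavour of what “one and then the other” means in practice, here’s a minimal Python sketch of the idea, not the coursework code itself: shift each letter by a fixed amount, then XOR the result with a key of equal length, Vernam-style. The password, shift and key below are placeholders:

#! /usr/bin/python
import os

def shift_encrypt(text, shift):
    # classic shift (Caesar) cipher over the lowercase alphabet;
    # a negative shift decrypts
    return "".join(chr((ord(c) - ord("a") + shift) % 26 + ord("a")) for c in text)

def vernam(data, key):
    # Vernam stage: XOR each byte with the corresponding key byte;
    # applying it twice with the same key decrypts
    return bytes(d ^ k for d, k in zip(data, key))

password = "secretpass"          # placeholder plaintext (lowercase letters only)
shift = 3
key = os.urandom(len(password))  # Vernam demands a key as long as the message

# encrypt: shift first, then XOR
ciphertext = vernam(shift_encrypt(password, shift).encode(), key)
print(ciphertext.hex())

# decrypt: XOR first, then shift back
recovered = shift_encrypt(vernam(ciphertext, key).decode(), -shift)
assert recovered == password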

Anyway, the report is attached – I just wanted to share it with the world lol 😀


Merging Shift and Vernam Cipher