Author: Michiel van der Velde

Talking to Eve

I have been playing Eve Online on and off since 2007. One of the features I liked most was the ability to interact with the world of Eve Online through their API. In the beginning, working with the API was painful for me, as documentation was lacking. More recently they switched over to a more modern API, the Eve Swagger Interface (ESI). Using ESI, an application or web site can interact with Eve Online in various ways.

ESI allowed me to build Ageira Trade, a web site where characters can buy and sell in-game resources from and to me. The site uses ESI to fetch in-game contracts and compare them against quotes submitted through the site. The matching happens automatically, which allows me to see which contracts are valid and accept them without having to check each one manually. I will write a blog post about Ageira Trade in the near future.
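
To give a feel for the matching step, here is a simplified sketch of the kind of comparison involved. The types and field selection below are illustrative only and not the actual Ageira Trade code.

// Simplified illustration of the matching idea; not the actual Ageira Trade code.
// EsiContract mirrors a few fields from ESI's character contracts endpoint,
// Quote is a hypothetical shape for quotes submitted through the web site.
interface EsiContract {
  contract_id: number
  issuer_id: number
  type: string
  price: number
}

interface Quote {
  id: string
  characterId: number
  price: number
}

// Pair each quote with a contract from the same character at the agreed price
function matchContracts (contracts: EsiContract[], quotes: Quote[]): Map<string, EsiContract> {
  const matches = new Map<string, EsiContract>()

  for (const quote of quotes) {
    const contract = contracts.find(c =>
      c.type === 'item_exchange' &&
      c.issuer_id === quote.characterId &&
      c.price === quote.price
    )

    if (contract) {
      matches.set(quote.id, contract)
    }
  }

  return matches
}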

Node.js

A while ago I developed a small node.js module for working with Eve Online Single Sign-On (SSO). Using SSO, developers can authenticate Eve Online characters and communicate with the API on their behalf. Eve uses scopes to implement this: the developer requests the scopes they need, and the user decides whether or not to grant them.

My first module, which handles SSO exclusively, is eve-sso. With eve-sso, developers can easily authenticate characters and request scopes. I am using this module myself in production over at Ageira Trade. The module takes away the hassle of dealing with redirect URLs and grant requests, and it supports both access tokens and refresh tokens.
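
Under the hood this is the standard OAuth 2.0 authorization-code flow against Eve’s SSO. The sketch below shows roughly what eve-sso handles for you; the constants are placeholders, and you should double-check the endpoint details against the SSO documentation.

// Rough sketch of the flow eve-sso wraps; constants are placeholders.
// Requires Node 18+ for the built-in fetch.
const CLIENT_ID = 'your-client-id'
const CLIENT_SECRET = 'your-client-secret'
const CALLBACK_URL = 'https://example.com/sso/callback'

// 1. Redirect the user to the SSO login page, requesting the scopes you need
function getRedirectUrl (state: string, scopes: string[]): string {
  const params = new URLSearchParams({
    response_type: 'code',
    redirect_uri: CALLBACK_URL,
    client_id: CLIENT_ID,
    scope: scopes.join(' '),
    state
  })

  return `https://login.eveonline.com/v2/oauth/authorize?${params.toString()}`
}

// 2. Exchange the authorization code from the callback for access and refresh tokens
async function getTokens (code: string) {
  const auth = Buffer.from(`${CLIENT_ID}:${CLIENT_SECRET}`).toString('base64')

  const response = await fetch('https://login.eveonline.com/v2/oauth/token', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${auth}`,
      'Content-Type': 'application/x-www-form-urlencoded'
    },
    body: new URLSearchParams({ grant_type: 'authorization_code', code })
  })

  return response.json() // { access_token, refresh_token, expires_in, ... }
}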

I built eve-esi on top of eve-sso. It is a more complete module, including account and character management, and aims to make working with ESI less painful. However, at the time of writing, the module could use an overhaul, which I intend to do in the near future. I made a wrong assumption when first building it, resulting in each character having a separate account with no way to link characters together. I intend to rectify this in the next major release.

Browser

Recently I started doing front-end development again, as you can read here. I needed a way to access the Eve Swagger Interface (ESI) from the browser. Luckily, I only needed to access endpoints which do not require authentication. This greatly simplified the task of writing a small module for this purpose.

The result is esi-browser, a simple browser module which caches results in LocalStorage and respects ESI’s Expires and ETag headers. I am using it as part of Ageira Trade. This small module allows developers to easily access, for example, info about in-game items, without having to load that information on the server instead.
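
The caching idea itself is straightforward. Something along these lines (a simplified sketch, not the actual esi-browser implementation):

// Simplified sketch of the caching strategy; not the actual esi-browser code.
interface CachedEntry<T> {
  body: T
  etag: string | null
  expires: number // epoch milliseconds, taken from the Expires header
}

async function cachedGet<T> (url: string): Promise<T> {
  const raw = window.localStorage.getItem(url)
  const cached: CachedEntry<T> | null = raw ? JSON.parse(raw) : null

  // Still fresh according to Expires? Serve straight from LocalStorage.
  if (cached && Date.now() < cached.expires) {
    return cached.body
  }

  // Otherwise revalidate, sending the ETag if we have one
  const headers: Record<string, string> = {}
  if (cached && cached.etag) {
    headers['If-None-Match'] = cached.etag
  }

  const response = await fetch(url, { headers })

  // 304 Not Modified: keep the cached body, just refresh the expiry
  if (response.status === 304 && cached) {
    cached.expires = Date.parse(response.headers.get('expires') ?? '') || Date.now()
    window.localStorage.setItem(url, JSON.stringify(cached))
    return cached.body
  }

  const entry: CachedEntry<T> = {
    body: await response.json(),
    etag: response.headers.get('etag'),
    expires: Date.parse(response.headers.get('expires') ?? '') || Date.now()
  }

  window.localStorage.setItem(url, JSON.stringify(entry))
  return entry.body
}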

Next steps

My Ageira Trade project is nearly complete; some functionality in the administration panel is still painfully absent, but the customer-facing portion and the back-end work like a charm.

Be sure to check out the modules if you’re interested in developing against ESI. The source code for these modules, as well as Ageira Trade, is available on GitHub under the MIT license. Happy coding!

My first full-stack deployment in a decade

It has been over a decade since I last developed a full-stack application. Back then I wrote bad PHP and absolutely murdered the server with grotesque SQL queries. A lot has changed in the intervening decade, including which skills are required for full-stack development.

Today I launched Ageira Trade, a website where fellow Eve Online players can sell me their ore or buy minerals. The website makes use of Eve Online’s API to authenticate characters and read in-game contracts. I started with the API and made my way to the front-end, an area I have little experience with. The result is a React app which appears to be performant, although I am sure there are many optimizations left to make. Mostly, the front-end was a learning process for me, teaching me skills I intend to use in the future.

Ageira Trade offered me the chance to experience the full development cycle, including Ubuntu administration. I have learned valuable lessons and gained new experience with front-end development. It has definitely been worth the (metaphorical) headaches.

Thanks for reading, hope to see you soon!

WebRTC Signaling with Signal-Fire

WebRTC is a technology which allows individual peers to talk directly to each other. Setting up such a connection, however, requires a signaling server.

A WebRTC signaling server relays messages between peers to negotiate peer-to-peer audio/video and/or data channels. Once established, these channels let your clients communicate directly with each other.

Years ago I developed signal-fire, a WebRTC signaling server built for node.js. There was also a browser client available, which greatly reduced the burden of setting up peer connections. Lack of maintenance led to the module’s eventual demise, and I recently officially retired it.

Luckily I had some inspiration for the new and improved version, and I got to work. The result was the Signal-Fire ecosystem, starting with Signal-Fire Server, a server that does exactly the same as its predecessor did, but better!

The Server

Signal-Fire Server is based on my other fairly recent module, Luce, a versatile WebSocket framework for node.js and an excellent pairing for my new project.

Command-Line Interface (CLI)

If you want to get started with Signal-Fire Server without too much hassle, and you’re content with the basic features (for now), you can use the CLI to start and manage Server workers.

Install the CLI globally:

> npm i -g @signal-fire/cli

To start a worker on port 3003:

> signal-fire start -p 3003

Starting the Server

The Server can be installed through npm:

> npm i @signal-fire/server

To manage client IDs the Server requires a registry. In the example below we use LocalRegistry, an in-memory store.

import { Server } from 'http'

import createApp from '@signal-fire/server'
import { LocalRegistry } from '@lucets/registry'

// In-memory registry to keep track of connected client IDs
const registry = new LocalRegistry()
const app = createApp(registry)
const server = new Server()

// Let the app handle WebSocket upgrade requests
server.on('upgrade', app.onUpgrade())
server.listen(3003, () => {
  console.log('Server listening on port 3003')
})

Congratulations, you now have a basic server running!

The Client

Signal-Fire Client is the replacement for signal-fire-client, which has also been deprecated. Like the Server, the Client is new and improved. It is designed for the browser and uses the native EventTarget.

The Client is meant to be used with browserify.

Install the client through npm:

> npm i @signal-fire/client

Connecting to the Server is exceedingly simple:

import connect from '@signal-fire/client'

const client = await connect('ws://localhost/socket')

Sessions

A session is the request/response exchange used to set up a peer connection. One peer creates a session, which its target can either accept or deny.

This example shows how to start a session:

import connect, { PeerConnection } from '@signal-fire/client'

async function run () {
  const client = await connect('ws://localhost:3003/socket')
  const session = await client.createSession('<target id>')

  session.addEventListener('accepted', async (ev: CustomEvent<PeerConnection>) => {
    console.log('Session accepted!')

    const connection = ev.detail
    const stream = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: true
    })

    stream.getTracks().forEach(track => connection.addTrack(track, stream))
  })

  session.addEventListener('rejected', () => {
    console.log('Session rejected')
  })

  session.addEventListener('timed-out', () => {
    console.log('Session timed out')
  })
}

This example shows how to accept a session:

import connect, { IncomingSession } from '@signal-fire/client'

async function run () {
  const client = await connect('ws://localhost:3003/socket')

  client.addEventListener('session', async (ev: CustomEvent<IncomingSession>) => {
    const session = ev.detail
    const connection = await session.accept()
  })
}

Next

The Signal-Fire Server and Client are projects I intend to keep maintaining and using myself. If you’ve checked out either and found a bug, please open an issue on GitHub or, better yet, a pull request.

WebRTC signaling with Signal-Fire

In 2016 I wrote signal-fire, a WebRTC signaling server built for node.js and a client built for the browser. I had not maintained the modules since then, which unsurprisingly resulted in them no longer working.

So recently I took it upon myself to start the projects from scratch. I wrote a capable and extensible WebRTC signaling server for node.js and an accompanying client module for the browser. Together they form a solid starting point for using WebRTC in any framework.

Early versions of both are already available.

The Server

The Signal-Fire Server is the main component of the ecosystem. It provides a flexible Luce application. Luce is a versatile WebSocket framework which uses asynchronous hooks to extend functionality. I developed Luce as the spiritual successor to my now deprecated module Illustriws.

At its core the Server provides each client with a unique ID, which can then be used to process the signaling necessary to set up a WebRTC peer connection. The protocol is JSON-based and simple to work with. Methods of exchanging IDs fall outside the scope of the Server, although the versatility of Luce allows many possible strategies for creating and storing IDs.
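
To make that a little more concrete, a signaling message could have a shape like the one below. This is purely illustrative and not the actual Signal-Fire wire format.

// Purely illustrative message shape; not the actual Signal-Fire wire format.
interface SignalingMessage {
  cmd: 'session-start' | 'session-accept' | 'description' | 'ice'
  origin: string   // unique ID of the sending peer
  target: string   // unique ID of the receiving peer
  data?: RTCSessionDescriptionInit | RTCIceCandidateInit
}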

The Command-Line Interface (CLI)

To make using Signal-Fire Server as easy as possible, I have developed a command-line interface (CLI). Using the interface one can start multiple app workers and manage their lifecycle. The interface is currently a work in progress, as are all Signal-Fire modules.

The Client

The Signal-Fire Client works in combination with the Server to provide an easy to use and (almost) complete WebRTC solution. The Client abstracts away the hassle of communicating with the Server, negotiating ICE candidates, and setting up peer connections and data channels.

The Client is designed to be used in the browser. The WebRTC spec has stabilized somewhat since 2016, so it is my hope the new Signal-Fire modules will be a little more future-proof.

The Future

I would like to continue development of both the Luce and Signal-Fire ecosystems. Unfortunately I lack some basic skills, like unit testing and CI. I plan to rectify the situation and refactor where necessary to get reasonable test coverage.

I intend to develop a product which includes both ecosystems as a fundamental part of its architecture. This should help me get an idea of what is actually working and important, and focus development accordingly.

It’s my hope both ecosystems will see some use. I have deprecated a couple of modules recently and resurrected some others (like Wormhole, my IPC module); the result has been Luce and Signal-Fire, and I am curious to see whether they get picked up.

Back From Beyond

This blog has not been updated in a long while, so I thought it was time to change that. This is the grand reopening of the Art of Coding blog. Welcome, grab yourself a piece of cake, and enjoy. This post contains a summary of what I’ve been working on recently.

I have been coding Luce, the spiritual successor to Illustriws and signal-fire (both of which I have officially deprecated). Luce is a versatile WebSocket server framework built for node.js; it uses asynchronous hooks analogous to middleware functions in, for example, Koa. I have created the beginning of an ecosystem which I hope others can use as well.
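
As an illustration, a hook could look something like this, assuming a Koa-like (ctx, next) signature; the exact Luce signature may differ.

// Hypothetical hook assuming a Koa-like signature; check the Luce docs for the real one.
async function logMessages (ctx: { message: unknown }, next: () => Promise<void>): Promise<void> {
  console.log('incoming message:', ctx.message)
  await next() // hand off to the next hook in the chain
}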

I have also resurrected signal-fire, the WebRTC signaling server for node.js. I have rewritten the server and client from the ground up, and both are still works in progress. The signaling server and client can be used to establish peer connections between individual peers for the exchange of video, audio, or other data; Signal-Fire helps ease the pain of implementing the signaling yourself.

In order to track peers, I have made a Registry interface which others can extend to implement client registries with multiple back-ends. This way you can scale your messaging apps with ease. Included is an in-memory registry, which can be used as a reference implementation. I have also made a Redis registry, which is currently a work in progress.
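
The shape below is a hypothetical approximation of such a registry, not the actual Registry interface, but it captures the idea of swapping back-ends.

// Hypothetical approximation of a registry; the actual interface may differ.
interface Registry {
  create (id: string): Promise<void>
  exists (id: string): Promise<boolean>
  remove (id: string): Promise<void>
}

// An in-memory back-end along the lines of the included reference implementation
class InMemoryRegistry implements Registry {
  private readonly ids = new Set<string>()

  async create (id: string): Promise<void> {
    this.ids.add(id)
  }

  async exists (id: string): Promise<boolean> {
    return this.ids.has(id)
  }

  async remove (id: string): Promise<void> {
    this.ids.delete(id)
  }
}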

I have done some work with Redis Streams, and as such developed redis-streams-manager, a streams manager built around the EventEmitter interface. Stream entries are emitted by stream name.
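
Usage is meant to feel like listening to any other EventEmitter. Something along these lines, where the constructor options and method names are assumptions rather than the exact API:

// Illustrative usage only; constructor options and method names are assumptions.
import StreamsManager from 'redis-streams-manager'

const manager = new StreamsManager({ host: '127.0.0.1', port: 6379 })

// Entries are emitted under the name of the stream they belong to
manager.on('notifications', entry => {
  console.log('new entry on "notifications":', entry)
})

// Start following the stream
manager.add('notifications')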

My Inter-Process Communication (IPC) module Wormhole had aged a little, so I have rewritten it in TypeScript. Now it’s future-proof and ready to be used (again).

That’s it for today. Thanks for coming, hope to see you again!

Enter the Wormhole; IPC goodness

This post and the example in it have been updated to match version 1.x.x.

Node’s child_process module allows you to spawn and fork new processes, optionally with a built-in Inter-Process Communication (IPC) channel. This is an easy way to communicate with the child process, and valuable on its own in many situations. But it’s not very developer-friendly.

Wait, scratch that, it’s very developer-friendly. It’s just not… easily reusable. You will need to define your own JSON-based protocol, which can become tedious if you need to do this often. (Addition: it’s not strictly necessary to define your own protocol; primitive values can be sent too, albeit without metadata.)
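
For reference, this is roughly what the raw channel gives you before any module gets involved: the parent calls send() on the child and listens for message events, and the child does the same through process.

// parent.ts – the raw IPC channel, no protocol on top
import { fork } from 'child_process'

const child = fork('./child.js')

child.on('message', message => {
  console.log('parent received:', message)
})

// Primitive values and plain objects can both be sent as-is
child.send({ type: 'greeting', text: 'hello child' })

And on the child’s side:

// child.ts – runs in the forked process
process.on('message', message => {
  console.log('child received:', message)

  // process.send is only defined when there is an IPC channel
  process.send?.({ type: 'reply', text: 'hello parent' })
})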

So, to stay in keeping with DRY and KISS I decided to make a little module that does most of the heavy lifting for me. Note that there are probably dozens of modules out there that do what I did and probably better, but that doesn’t take the fun out of building it. It’s simple and it works, so I’m sure there are applications.

I wanted a couple of things:

  • A way to notify the other end of the link of events that have happened
  • A way to call commands on the remote end and receive a result back (RPC-like behavior)

To satisfy these requirements I built wormhole. It’s designed to work with Node.JS’s child_process and process, and provides what I was looking for.

Installing it is super simple:

npm i @art-of-coding/wormhole --save

How do you use it, you ask? It’s fairly simple – just fork a child process that uses wormhole, and you can send events and call commands!

In the example below we fork a child process, and both processes use wormhole. This allows them to send events and call remote commands. All features are supported in either direction (it’s bidirectional).

The master process:

const childProcess = require('child_process')
const Wormhole = require('@art-of-coding/wormhole')

const child = childProcess.fork('./my-child.js')
const wormhole = new Wormhole(child)

// Register a `startup` event handler
wormhole.events.on('startup', () => {
  console.log('received startup event!')
})

// Register an `add` command
wormhole.define('add', function (a, b) {
  return a + b
})

// Send the `quit` event to the child
setTimeout(() => wormhole.event('quit'), 5000)

The child process:

const Wormhole = require('@art-of-coding/wormhole')

// Without the `channel` argument, `process` is selected by default
const wormhole = new Wormhole()

// Register a `quit` event handler
wormhole.events.once('quit', () => {
  process.exit(-1)
})

// Send an event
wormhole.event('startup')

// Call a remote command
wormhole.command('add', 5, 6).then(result => {
  console.log(`5 + 6 = ${result}`)
})

As you can see for yourself, using it could not be easier!

You can find wormhole on GitHub or npm.

A simple procedure caller

Sometimes you just need something simple. Something that’s light-weight though capable, and does what you want – nothing more. That’s what I did with procedure-caller, a simple Node.JS module for calling, you guessed it, procedures. To be clear: in this context, a ‘procedure’ is nothing more than a function that can be called repeatedly.

Installing it using npm is child’s play:

npm i @art-of-coding/procedure-caller --save

Now that it’s installed we’ll dive right into an example:

const ProcedureCaller = require('@art-of-coding/procedure-caller')

// Create a new instance
const pc = new ProcedureCaller()

// Define a procedure named 'add', which adds two numbers
pc.define('add', function (a, b) {
  if (isNaN(a) || isNaN(b)) {
    throw new TypeError('arguments must be numbers')
  }

  return a + b
})

// Now call the procedure
const result = pc.call('add', 5, 6)

// Display the result
console.log(`5 + 6 = ${result}`)

As you can see, it’s easy to call a procedure and get the result. But what if we’re using asynchronous methods, like Promises or async/await? That’s covered too!

const ProcedureCaller = require('@art-of-coding/procedure-caller')
// We're using gh-got to talk to the GitHub API
const ghGot = require('gh-got')

const pc = new ProcedureCaller()

// Define an async procedure
pc.define('repo', async function (user, name) {
  const response = await ghGot(`repos/${user}/${name}`)
  return response.body
})

// Call the async procedure
pc.call('repo', 'Art-of-Coding', 'procedure-caller').then(result => {
  console.log(`Repo description: ${result.description}`)
})

Like I said in the opening of this post, my goal was to make something that was exceedingly simple to use. I believe I have done so.

You can view the module on npm and GitHub.