I’m really excited to introduce Userscripts.

What if you could use any Machine Learning model to improve training data? For free? Introducing Userscripts.

What is it?

Userscripts are JavaScript (JS). The high-level idea is that you can build custom functions inside Diffgram. Here are some examples of what you can achieve.

Want to see the code?

Get the full code

What’s the big deal?

Free, Freaky Powerful, and Fast!!

This is a real game-changing feature because now it’s free to run these models as often as needed (because they run on your local computer).

It’s powerful because it’s actually code. You can run any of the latest models from NPM, your own models, or combinations of them. All in the browser, instantly.

And it’s fast. There’s no waiting on remote server calls; these models run super fast locally. It’s also fast to program and get feedback: the script compiles instantly and can be tested instantly on any of the existing media.

When does the script run?

The idea is that it’s event driven. Scripts run in response to user actions, for example a user creating a new instance, changing files, and so on.

Each interaction event includes its details, such as the newly created instance. Your script can then do something with it: run a machine learning model, modify the UI, call an API, or anything else you want.
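As a rough sketch, a handler for one of those events might look like the snippet below. The function name, registration style, and event payload shape are placeholders for illustration, not the exact Diffgram Userscript API.

```javascript
// Hypothetical sketch only: the function name and event shape are
// assumptions, not the confirmed Diffgram Userscript API.
function on_instance_created(event) {
  // The event carries information about what just happened,
  // e.g. the instance the user just drew.
  const instance = event.instance;

  // React however you like: run a model, call an API, update the UI, etc.
  console.log('New instance created with label:', instance.label);
}
```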

For example, inside the script you can choose to create instances. Imagine you run a segmentation model. You might then use OpenCV to process the segmentation mask into polygon points and pass those to Diffgram, giving the user instances they can interact with.
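To make that concrete, here is a hedged sketch that uses opencv.js to turn a binary segmentation mask into simplified polygon points. It assumes the `cv` global from opencv.js is already loaded; `create_polygon` is a placeholder for however you hand the points back to Diffgram, not a confirmed built-in.

```javascript
// Sketch: convert a binary segmentation mask (ImageData) into polygons.
// Assumes opencv.js is loaded; create_polygon() is a placeholder name.
function maskToPolygons(maskImageData) {
  const src = cv.matFromImageData(maskImageData);   // RGBA ImageData -> Mat
  const gray = new cv.Mat();
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
  cv.threshold(gray, gray, 127, 255, cv.THRESH_BINARY);

  const contours = new cv.MatVector();
  const hierarchy = new cv.Mat();
  cv.findContours(gray, contours, hierarchy, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);

  for (let i = 0; i < contours.size(); i++) {
    const contour = contours.get(i);
    const approx = new cv.Mat();
    // Simplify the contour so the user gets editable polygon points,
    // not thousands of pixel-level vertices.
    cv.approxPolyDP(contour, approx, 0.01 * cv.arcLength(contour, true), true);

    const points = [];
    for (let j = 0; j < approx.rows; j++) {
      points.push({ x: approx.data32S[j * 2], y: approx.data32S[j * 2 + 1] });
    }
    create_polygon(points);  // placeholder for the Diffgram call
    approx.delete();
  }

  src.delete(); gray.delete(); contours.delete(); hierarchy.delete();
}
```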

Built-in Functions

To get started we are offering some basic built-in functions, like creating boxes, getting the current canvas, and showing user messages.
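For a sense of the shape of these, a toy usage might look like the following. The names here are illustrative stand-ins, not the confirmed built-in names; check the Diffgram docs for the real ones.

```javascript
// Illustrative only: placeholder names, not the confirmed Diffgram built-ins.
const canvas = get_current_canvas();          // read pixels to feed a model
create_box({ x: 10, y: 20, width: 100, height: 50, label: 'person' });
show_user_message('Created 1 box from the model output.');
```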

Dependencies (NPM and more)

You can load any models you need!
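One way to do that in the browser, shown as a sketch below, is to inject script tags for TensorFlow.js and BodyPix from a CDN and then load the model. How dependencies are actually declared inside Diffgram may differ; this is plain browser JavaScript for illustration.

```javascript
// Sketch: pull TensorFlow.js and BodyPix from a CDN at runtime.
// Dependency handling inside Diffgram itself may work differently.
function loadScript(url) {
  return new Promise((resolve, reject) => {
    const s = document.createElement('script');
    s.src = url;
    s.onload = resolve;
    s.onerror = reject;
    document.head.appendChild(s);
  });
}

async function loadBodyPix() {
  await loadScript('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs');
  await loadScript('https://cdn.jsdelivr.net/npm/@tensorflow-models/body-pix');
  return bodyPix.load();  // returns a model ready for segmentPerson()
}
```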

What direction is this going in?

Our direction is to make the development and execution process the best it can possibly be. More modular code. More event hooks. More built-in functions. Faster runtime performance.

Overall, to make it really easy to create and share powerful scripts.

This is probably not what you think

Oh, another speed-up approach, right? Well, not really.

See, the thing is: just running a model that’s already good at detecting something doesn’t really mean much.

Instead think of it more like:

What information can the user add that, in an interactive sense, makes an existing algorithm work? Region of interest is the go-to example of constraining the problem here, but really any user interaction, including clicking text attributes, is fair game.

Another example: use something like BodyPix (or literally BodyPix) to pre-label people. Then the real supervision work is actually to answer “Is this person on the phone?”. The model draws the spatial location (what it already knows) and we supervise the new information (on the phone or not).
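Tying the earlier sketches together (and still assuming the placeholder helpers above), the pre-labeling half of that workflow could look roughly like this; the “on the phone” judgment stays with the human.

```javascript
// Sketch: pre-label people with BodyPix, reusing the loadBodyPix() and
// maskToPolygons() sketches from earlier. Helper names are assumptions.
async function preLabelPeople(imageElement) {
  const net = await loadBodyPix();
  const segmentation = await net.segmentPerson(imageElement);

  // Render the person mask as white-on-black ImageData so the
  // opencv.js contour sketch above can trace it.
  const mask = bodyPix.toMask(
    segmentation,
    { r: 255, g: 255, b: 255, a: 255 },  // person pixels
    { r: 0, g: 0, b: 0, a: 255 }         // background pixels
  );
  maskToPolygons(mask);
}
```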

How is this different from other methods?

Scale.

If I had to pick one word, then that’s it: Scale. This is more of a mini-paradigm than a single approach. This is a new way of thinking about user interactions with training data.

You can build anything you want. Quickly. And deploy it to annotators instantly.

This isn’t a “one-trick pony” demo video that just happened to look cool. This is real. It’s combinable with other methods like pre-labeling. Userscripts are composable and customizable.

My hope is that this really changes the conversation and moves us towards a better understanding of what methods really work in what contexts to improve annotation.

How do I get it?

Sign up for a Diffgram account.

We have an even BIGGER (well, at least for us) announcement coming soon! Stay tuned! :)

Best,

Anthony