Make RPC non-blocking?

I’m using the RPC library on a robot to make I2C calls to an OpenMV for the purpose of finding AprilTags. I’m using a Pololu A-star 32U4 (same as a Leonardo) as the master (though I’ll likely switch to a SAMD chip in the near future). It was super easy to set it up to make calls to the camera and then use that to follow an AprilTag around. Nice work!

My issue is with the way that the RPC library works. The calls to rpc_master::__get_result() are blocking: the function keeps checking for results and only returns after the camera responds, which blocks for a little more than 100ms with each call.

I would like to make the calls non-blocking: command the camera to search for tags and then make periodic “check-ins” with the camera to see if there are results, but allow the uC to do other things while it’s waiting. Looking at rpc_master::__get_result(), it looks like it would be pretty straightforward to build in a state machine that breaks the calls in that function into a couple of non-blocking functions. Before I dig into that rabbit hole, though, a few questions:

  • Am I missing something about the library – does it have this functionality already?
  • Anyone have any pointers? Anything that jumps out as things to watch out for?

Thanks for any help you can provide.
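For concreteness, here's roughly the shape I have in mind for breaking the blocking wait into non-blocking pieces — a pure-Python sketch, not the actual `rpc_master` API; `read_some`, the packet handling, and the class name are all stand-ins:

```python
import time

class NonBlockingResult:
    """Illustrative state machine that splits a blocking
    'wait for result' loop into small poll() steps."""
    IDLE, WAITING, DONE = range(3)

    def __init__(self, read_some, timeout_s=1.0):
        self.read_some = read_some   # stand-in for the bus read; returns bytes or b""
        self.timeout_s = timeout_s
        self.state = self.IDLE
        self.buf = b""
        self.deadline = 0.0

    def start(self):
        """Kick off a transaction; returns immediately."""
        self.buf = b""
        self.deadline = time.monotonic() + self.timeout_s
        self.state = self.WAITING

    def poll(self, expected_len):
        """Call this from the main loop; never blocks for long."""
        if self.state != self.WAITING:
            return None
        self.buf += self.read_some()
        if len(self.buf) >= expected_len:
            self.state = self.DONE
            return self.buf[:expected_len]
        if time.monotonic() > self.deadline:
            self.state = self.IDLE   # timed out; caller may retry
        return None
```

The main loop would call `poll()` once per pass and do other work whenever it returns `None`.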


Thanks for asking about this before doing it. Yes, you’re missing the streaming method. I.e. command the OpenMV Cam to stream out AprilTag results and then receive those results via a stream on the master.

It’s not quite non-blocking, but it kinda does what you want. Basically, you can put all your code logic in the callback and then only return from the callback when you expect a result to be ready. This then lets your serial buffer accumulate results while your code runs.

If you want things to be non-blocking then I would recommend to make a new streaming method that’s non-blocking. I’ll be happy to accept a PR for that.

You just need to make a version of this:

If you look at the code you can see it will wait 1 second to get each value package.

Another alternative is to make a non-blocking interface where, instead of waiting for data in a loop, you just execute some callback while waiting.

There are lots of ways to do non-blocking.
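For example, the callback-while-waiting alternative could be sketched like this (pure Python with hypothetical names; the real get_bytes/put_bytes signatures differ):

```python
import time

def wait_for_data(try_read, idle_callback, timeout_s=1.0):
    """Wait for try_read() to return data, but run idle_callback()
    between attempts instead of spinning. Returns the data, or
    None on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        data = try_read()
        if data:
            return data
        idle_callback()   # the master does other work here
    return None
```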

Thanks for getting back to me. I have to admit, I’m a little lost, but I have some time today to dig into things a little more.

First of all, in the Stream Mode documentation, it says this,

Please see the Arduino Stream Master and Arduino Stream Slave sketches for how to use the RPC library in stream mode. Note that we do not supply examples for how to use the RPC library with the OpenMV Cam in stream mode as the OpenMV Cam will trivially overrun the data buffers on all but the most advanced Arduinos.

The streaming examples that I see stream from a camera to a “computer”, so it’s unclear whether streaming mode is viable here. I’m only streaming a few bytes (AprilTag data), so I’m guessing that it’s not really a problem. Can you clarify? Is there a camera (as secondary) to uC (as master) streaming example to start with?

More fundamentally, I don’t understand all of the synchronization business. In my mind, I2C transactions should act more like, say, an IMU:

  • The uC (as master) makes a write call to the camera (as secondary) to tell it to start searching for AprilTags (this isn’t even strictly necessary, since that’s all the camera is doing)
  • The uC, “at its leisure”, makes a read call to the camera to ask for the latest data
  • The camera returns the number of tags, followed by tag data
  • Upon getting a 0, the uC drops the connection
  • Upon getting 1 or more tags, the uC reads them
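In code, I picture that transaction looking something like this (a pure-Python sketch against a fake device; the byte layout and helper names are made up for illustration):

```python
import struct

class FakeTagCamera:
    """Stands in for the camera: returns a tag-count byte followed
    by (id, cx, cy) records, IMU-register style."""
    def __init__(self, tags):
        self.tags = tags   # list of (id, cx, cy) tuples

    def read(self):
        out = bytes([len(self.tags)])
        for tag_id, cx, cy in self.tags:
            out += struct.pack("<BHH", tag_id, cx, cy)
        return out

def fetch_tags(camera):
    """The read side of the scheme above: read the count, then
    the records; an empty count just ends the transaction."""
    data = camera.read()
    count = data[0]
    if count == 0:
        return []            # nothing found; drop the connection
    tags = []
    off = 1
    for _ in range(count):
        tag_id, cx, cy = struct.unpack_from("<BHH", data, off)
        tags.append((tag_id, cx, cy))
        off += 5             # 1 id byte + two 16-bit coordinates
    return tags
```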

All of this is dependent on an I2C class on the camera that properly responds to I2C calls. In the Arduino framework, for example, to make an “Arduino” a secondary device, you give it an address and define requestEvent(). An interrupt on the Arduino is used to call that function in response to queries. That doesn’t seem to be an option with the OpenMV. In fact, putting an oscilloscope on the I2C bus, it appears that most of the calls get NACKed, indicating that the camera is not even picking up the line when requested. This is because the I2C bus is torn down after each call to get/put_bytes.

Somewhere, I take it, things get synchronized, so that the camera is listening for requests from time to time, but I don’t understand at the low level how I2C calls are being serviced. Everything seems to boil down to I2C.send on the camera side, but I can’t seem to find where I2C.send is defined in pyb to understand what is going on.

Ultimately, I could get the behaviour I want by just writing to a UART. For pedagogical reasons, however, I wanted to explore using I2C. With the UART, it was fairly trivial to just set the camera to perform AprilTag searches and then spit them out over the UART when found (outside of the RPC library). The uC just listens and accumulates data in a buffer until it can process it. I could, in principle, do something similar with I2C by making the camera the master, but then I have the extra headaches of a multi-master system, since the uC also needs to talk to the IMU and a controller elsewhere. There is the I2C example in the OpenMV library, but it is peppered with warnings about losing connections. The example appears to have the functionality I’m looking for as well, but it, too, will be prone to errors because the bus only appears to be able to send in limited windows.

You can stream to an Arduino. I just don’t have an example with the OpenMV Cam so as not to encourage people to do it.

Regarding the bus: the pyb module doesn’t receive data over I2C when the OpenMV Cam firmware is not actively waiting for a command. E.g. when it starts doing something, it will not respond at all to any I2C requests. There’s no buffering or handshaking. The camera is basically not on the I2C bus until the OpenMV code calls receive using the pyb module.

I designed the RPC library around this issue… The point of the retry system is to sync up with the camera. SPI slave mode has the same issue. If the camera is not in the method running the receive command then it cannot receive data.

Mmm, so, if you want to do this over I2C, I would just pass a very low timeout into the call method. The call method accepts a timeout that defaults to 1 second. If you drop that into the 1-10 millisecond range, then you can effectively poll the camera. Whenever the camera is ready to receive the next command, it will respond with a result.
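The master-side loop would then be something like this sketch (pure Python; `call_once` stands in for the RPC call method, which really does take a timeout argument):

```python
def run(call_once, do_other_work, attempt_timeout_s=0.005, passes=10):
    """Main loop: each pass tries one short-timeout RPC call; misses
    are expected while the camera is busy detecting tags."""
    results = []
    for _ in range(passes):           # stand-in for 'forever'
        r = call_once(timeout_s=attempt_timeout_s)
        if r is not None:
            results.append(r)         # camera was ready this pass
        do_other_work()               # runs every pass regardless
    return results
```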

Then, on the OpenMV Cam side of things, run the AprilTag code once per RPC loop. There’s a callback in the RPC library that runs at the end of each loop. Then, adjust the RPC timeouts so the camera doesn’t wait forever for the master to connect before giving up and doing another RPC loop.

The RPC loop can then execute as normal returning results.
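The camera-side shape, as an illustrative pure-Python sketch (`serve_once` and `end_of_loop` are stand-ins for the RPC library’s receive call and end-of-loop callback; real code would use the OpenMV rpc module):

```python
def rpc_loop(serve_once, end_of_loop, iterations=5):
    """Each pass: give the master a bounded window to issue a command
    (serve_once returns True if one was handled), then run the
    end-of-loop work (e.g. one AprilTag search) either way."""
    handled = 0
    for _ in range(iterations):
        if serve_once():      # bounded wait inside, not forever
            handled += 1
        end_of_loop()         # AprilTag detection happens here
    return handled
```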

Keep in mind that lowering the timeout on the OpenMV Cam RPC side may result in things never syncing.

Because AprilTags takes so long to run… if you are running that continuously then you may never have time to sync. An external I/O pin that commands the camera to do an operation may be what you need to make this easier.

Finally, you can use micropython.schedule() to run AprilTags between normal MicroPython bytecodes executing. However, again, because AprilTags takes so long to run, this will result in high delays between bytecodes.

I think we kinda need to support threads on the OpenMV Cam to do what you want.

MicroPython just added support for co-routines. So, this may be coming sooner than you think. True threading was hard to get running given the GIL and GC issues when using threads in MicroPython.

I like the pin idea (indeed, I was thinking about that, but needed some clarity for how to do it). Adds some reliability to the process.

I don’t care about processing time on the camera side – it only has one job: detect AprilTags (well, and report them). So I think the process would be to just bring a pin high at the end of the loop and give the uC 10 - 20 ms to fetch the data; otherwise go back to searching for tags. I think I can figure out how to do that (I’m not really a python person, so that’s hindering me a bit).
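Sketched out, the camera-side loop I have in mind would be something like this (pure Python standing in for MicroPython; the `Pin` class, the window length, and `master_fetched` are all illustrative):

```python
import time

class Pin:
    """Tiny stand-in for a GPIO output pin."""
    def __init__(self):
        self.value = 0
    def high(self): self.value = 1
    def low(self):  self.value = 0

def detection_loop(find_tags, master_fetched, ready_pin, window_s=0.015):
    """One camera-side pass: search for tags; if any were found,
    raise the ready pin and give the master ~10-20 ms to fetch,
    then drop the pin and go back to searching either way."""
    tags = find_tags()
    if not tags:
        return False            # nothing found; keep searching
    ready_pin.high()            # signal 'result ready' to the uC
    deadline = time.monotonic() + window_s
    fetched = False
    while time.monotonic() < deadline:
        if master_fetched():    # uC grabbed the data in time
            fetched = True
            break
    ready_pin.low()             # window over, back to searching
    return fetched
```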

Incidentally, I tried the april_tag_as_pixy_cam example, as it has (kind of) the functionality I’m looking for. It’s not particularly reliable (which is why you made the RPC library, I assume), but it kind of works. It suffers from a fatal flaw, however, which is that the camera holds SCL low between calls from the uC, like clock stretching run amok. (The RPC library does that, too – when it starts sync’ing it can hold the clock low for 5 - 10 ms.)

OK, thanks for the pointers. I really like what you’ve done.

Clock stretching is done by the STM32 hardware. It’s unfortunately not really fixable. In our C code we actually catch when the hardware is doing this and then force a reset of the STM32 I2C bus hardware. See here: micropython/pyb_i2c.c at aa839707bd67ed3a445466b79c13071c5b1dcdbd · openmv/micropython · GitHub

Anyway, I think what you want to do with the IRQ line makes sense. This is how most things are done. You can have a party line which all cameras can pull down, or one line per camera. The line is pulled high by default. When the line is low, you know that a camera has a result ready. Then, in an interrupt handler or the like, you can grab the result via RPC.