If you have ever competed in VEX you know how painful programming the Cortex is. In order to develop your robot code you have to plug into the robot or controller, upload, wait several seconds, reset the field, test your code, and repeat. Sometimes you may not even have physical access to a robot, which makes the entire process even slower. So far there has been only one way to simulate your robot, which is RVW (Robot Virtual Worlds). Unfortunately, RVW only works for RobotC code, and you only have a handful of pre-built virtual robots to choose from, making it useless for most teams. What compounds these issues even more is that the Cortex does not have any officially supported debugging functionality: we do not have a JTAG port to attach a debugger when things go wrong. I aimed to solve this by implementing a way of simulating your robot hardware seamlessly, but there are a few technical challenges to overcome before that is possible. To explain these challenges, let's first look at a block diagram of the Cortex. Ideally we would do full emulation, including both the supervisor and user SoCs, so that we could execute the exact binaries that are used on a real robot. I quickly realized how difficult that would be, as the supervisor's pinout is not documented and neither is the protocol it uses to communicate with VEXNet keys.

The user SoC is an STM32F103VD with 384K flash, 64K RAM, and a Cortex-M3 ARMv7-M processor. QEMU itself supports the Cortex-M3, but only a very limited number of SoCs and development boards, not including anything STM32. Support for the STM32 specifically is extremely important because timers, interrupt control, DMA, and most other MMIO functionality are vendor-specific. It turns out there is a QEMU fork that adds STM32 support for the STM32F103xx SoCs here. This fork includes enough to get PROS running out of the box and even print messages to UART without any modifications to the kernel. This is great, but unfortunately I2C and SPI aren't implemented, and the ADC is incomplete (it lacks continuous conversion mode). I need SPI for the supervisor or user code won't even run, I2C is needed to simulate IMEs, and the ADC is needed to simulate analog sensors.

Sadly, the QEMU internals aren't too well documented, so I had to take reference from some of the other hardware implementations (DMA, ADC, etc.). Timers are simple: `s->circular_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, (QEMUTimerCB *)vex_mgr_stream_circular_timer, s)` creates a nanosecond timer with `vex_mgr_stream_circular_timer` as a callback for when it gets triggered. Reading the current virtual time with `uint64_t curr_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)` lets us schedule the timer 10ms in the future. Next we need the IRQ list, which is returned by `stm32_init` and can just be passed to `vex_mgr_create` in `vex_cortex_init`. Once you have the list (named `pic`), you can trigger interrupts with `qemu_irq_pulse`.