ijor wrote: The HBL interrupt is generated by GLUE. There is jitter in the number of cycles this interrupt takes. The jitter is caused by the CPU itself and is related, indeed, to the internally generated E clock (the actual technical reason is that GLUE interrupts are autovectored). The jitter, however, is not random. It is a fully predictable pattern (based on the phasing of the E clock). Paulo and I proved how to sync to the HBL a few years ago.
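A back-of-envelope sketch of why that pattern is predictable, in C. The figures are mine, not from the thread: an 8 MHz 68000 whose E clock runs at CLK/10, and a 512-cycle PAL scanline. Since 512 mod 10 = 2, the E phase at each HBL advances by two cycles per line, so the autovector synchronization delay repeats every five lines:

```c
#include <stdio.h>

/* Model of the E-clock phasing behind the HBL jitter pattern.
 * Assumptions (mine): E period = 10 CPU cycles, PAL line = 512 cycles. */
int main(void)
{
    const int E_PERIOD = 10;   /* E clock period in CPU cycles */
    const int LINE = 512;      /* CPU cycles per PAL scanline  */
    int line;

    /* 512 mod 10 = 2, so the phase walks 0,2,4,6,8 and then repeats. */
    for (line = 0; line < 10; line++)
        printf("line %2d: E phase at HBL = %d\n",
               line, (line * LINE) % E_PERIOD);
    return 0;
}
```

This is only the arithmetic; the exact per-phase delays depend on how GLUE and the 68000's VPA handshake line up, which is presumably what ijor and Paulo measured.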
simonsunnyboy wrote: I was told in the past not to use the HBL and to use Timer B instead. Is the HBL even more unstable in comparison, e.g. even fewer cycles to use? (See the other thread.)
mc6809e wrote: I'm guessing that to get pixel-accurate timing you had to use a STOP instruction and an HBL interrupt handler padded with instructions based on the current scanline ...
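The real code is 68000 assembly, but the shape of the trick can be sketched in C; every name and number below is hypothetical, the empty functions stand in for the STOP instruction and the hardware poke, and the busy loop stands in for a hand-tuned NOP slide:

```c
#define LINES 313                    /* PAL scanlines (assumption) */

static const int pad_cycles[LINES];  /* per-line padding, found by tuning */
static volatile int scanline;

static void write_palette_register(void) { } /* stands in for the hardware poke */
static void wait_for_interrupt(void)      { } /* stands in for the 68000 STOP   */

static void hbl_handler(void)
{
    volatile int i;

    /* Burn a scanline-dependent number of cycles so the hardware write
     * below always lands on the same pixel of the line. */
    for (i = 0; i < pad_cycles[scanline]; i++)
        ;
    write_palette_register();
    scanline = (scanline + 1) % LINES;
}

int main(void)
{
    for (;;) {
        wait_for_interrupt();  /* CPU parked at a known instruction boundary */
        hbl_handler();         /* so the latency from here on is constant    */
    }
}
```

The point of STOP is that the CPU sits at a known instruction boundary when the interrupt arrives, removing the variable-length-instruction part of the jitter.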
Dio wrote: So the E sync still applies when the reason for asserting VPA is an autovectored interrupt... that's a properly badly documented feature of the 68000.
ijor wrote: It is explicitly documented. I don't know why you think it is badly documented. You might have been looking at the wrong manual(s).
Dio wrote: While it does say 'a normal M6800 read cycle', it doesn't actually mention that the E-clock wait states are involved, there's no mention of the different behaviour in the documentation on interrupts, and it's equally poor to omit it from the section on interrupt timing. It's hardly a huge omission, and sure, by the standards of technical documentation it's a rounding error, but it's still an omission. If you had a system that never used VPA anywhere but for autovectors, you might feel, with some reason, quite cheesed off that a timing feature like this wasn't called out elsewhere.
Dio wrote: ... but nobody except yourself ever found it ...
ijor wrote: ... reason, there wasn't too much communication between hardware and software guys (especially not with demo coders) in the ST scene.
Dio wrote: Yeah, absolutely. I think the ST wasn't an enormously attractive machine to hardware hackers, probably due to the lack of a proper expansion bus; conversely, it was extremely popular with software hackers, since it's almost all about the software.
mc6809e wrote: Consider how long it's taken to get a PC to run the CPU in parallel with hardware graphics acceleration. It took years for Windows to get to a point where a programmer could launch a hardware-accelerated graphics function that wouldn't block on the call. The OS wasn't designed for that concurrency, and as a result many graphics accelerators weren't built with a way to asynchronously inform the CPU that an operation had completed. Precious CPU cycles were wasted running a simple loop that checked whether the hardware had finished. No OS support for asynchronous operation meant no hardware support, and no hardware support meant there was no need to alter the software. A Catch-22.

Just drawing boxes in a window on a single-CPU machine running WinXP can make the machine unresponsive. I've done it. The only way out was to create a condition where the thread would yield after drawing a small number of boxes.
And WebGL can still make a WinXP machine freeze because of that lack of appreciation for concurrency. Vista finally fixed that.
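A minimal Win32 sketch of that batching workaround; the batch size, the drawing, and the function name are all illustrative:

```c
#include <windows.h>

/* Draw boxes in small batches, yielding between batches so one thread
 * can't starve the rest of the machine. Drop into any Win32 program. */
void DrawBoxesResponsively(HWND hwnd, int total)
{
    HDC hdc = GetDC(hwnd);
    int i;

    for (i = 0; i < total; i++) {
        int x = (i % 50) * 4;
        Rectangle(hdc, x, x, x + 50, x + 50);
        if ((i + 1) % 64 == 0)
            Sleep(0);   /* give up the rest of the timeslice */
    }
    ReleaseDC(hwnd, hdc);
}
```

Sleep(0) just offers the remainder of the timeslice to other ready threads; it doesn't fix the underlying lack of GPU concurrency, which is the thread's point.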
Dio wrote: I write graphics drivers for a living, so I know a bit about this. I think you miss the target a bit in looking at notifications and the OS. The Vista driver model is mostly simpler for us to work within (no more kernel debuggers for most of the code, a lot fewer bluescreens and, as you say, a lot less time in kernel mode, which generally makes the machine more responsive). But the philosophy before Vista was that the video driver was just Another Device Driver. Moving it up across the ring boundary hasn't all been gravy, mind; it creates a different set of problems.
Dio wrote: A rising fundamental problem now is that you can't preempt the GPU very well. It's possible (indeed, trivial now, with arbitrary-sized programs) to run a single command on the hardware that takes a very long time to complete, when really it would be nice to be able to preempt it at roughly the millisecond level. New hardware is working towards that...
mc6809e wrote: My point was that the OS's philosophy of CALL function, wait, CALL function, wait, to some extent dictated the hardware. How long did it take for graphics hardware to provide even an interrupt for something like VBlank? Polling port 3DAh wasted cycles. A workaround was to use the timer, but you had to poll a little anyway in case the frequency was a little off.
Dio wrote: Certainly by 1997, although I suspect most chips had the capability a lot earlier. The big enabler of concurrency for 3D was off-chip FIFOs, quickly replaced by command-buffer DMA; these developed in the 1996-99 timeframe.
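For reference, the busy-wait mc6809e mentions looked roughly like this (a DOS-era sketch; inp() is the port-read intrinsic of Microsoft-style DOS compilers, and bit 3 of Input Status Register 1 at 3DAh is the vertical-retrace flag):

```c
#include <conio.h>   /* inp() on Microsoft-style DOS compilers */

#define INPUT_STATUS_1 0x3DA  /* VGA Input Status Register 1 */
#define VRETRACE       0x08   /* bit 3: vertical retrace active */

/* Spin until the next vertical retrace begins. Every cycle spent in
 * these loops is a cycle the CPU can't use for anything else. */
void wait_for_vblank(void)
{
    while (inp(INPUT_STATUS_1) & VRETRACE)     /* if mid-retrace, wait it out  */
        ;
    while (!(inp(INPUT_STATUS_1) & VRETRACE))  /* then wait for the next start */
        ;
}
```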
mc6809e wrote: Didn't command buffers come in with DirectX 5? The first release that would run on a Win95 machine was in 1998. Command-buffer DMA took much longer to arrive, I think. There are still cards today that use the CPU to maintain the queue of commands. That's better than polling, to be sure, but the ability to have a separate graphics-op sequencer has been around since at least 1985 (the Amiga's copper could run a sequence of blitter ops).
Oh, well. I know I sound like an old man telling the kids to get off my lawn. I suppose I'm just a little jaded when I see the mess that's Windows.
Dio wrote: Trust me, DirectX is about ten times better than OpenGL. I started out as a GL driver man, but watched with increasing horror as it turned into a complicated mess of interacting extensions, and I was extremely happy when they moved DirectX to user mode so I could jump ship without having to work in kernel mode all the time. It's not perfect - notably, it's very short on throughput - but the Windows driver stack is tolerably well designed.
mlynn1974 wrote: I was told that, in simple terms, this is because the code is doing a lot of DIVS and MULS instructions, which take on the order of 160 clock cycles. The interrupt can't be triggered during the execution of an instruction (only at an instruction boundary), so it misses the start of the scanline by a bit, hence the jitter. I take it it would take a lot more effort to get the rasters really stable when the demo has been programmed like that?
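For scale (my cycle counts, not the poster's): a toy model of instruction-boundary interrupt recognition. The interrupt is only serviced once the instruction in flight completes, so the service time wanders by up to the length of that instruction, and a worst-case DIVS at roughly 158 cycles is a sizeable fraction of a 512-cycle ST scanline:

```c
#include <stdio.h>

/* Toy model: an interrupt that fires mid-instruction is only serviced
 * at the next instruction boundary. Rough 68000 cycle counts:
 * NOP 4, MULS up to ~70, DIVS up to ~158. */
int main(void)
{
    const int lengths[] = { 4, 70, 158, 8 };   /* an instruction stream */
    const int n = sizeof lengths / sizeof lengths[0];
    int fire, t, i;

    for (fire = 0; fire < 240; fire += 60) {   /* a few HBL arrival times */
        for (t = 0, i = 0; i < n && t + lengths[i] <= fire; i++)
            t += lengths[i];                   /* instructions already done */
        if (i < n)
            t += lengths[i];                   /* finish the one in flight  */
        printf("IRQ at cycle %3d -> serviced at %3d (jitter %3d)\n",
               fire, t, t - fire);
    }
    return 0;
}
```

Running it shows the jitter swinging from a few cycles up to over a hundred depending on where in the stream the interrupt lands, which is exactly the "misses the start of the scanline a bit" effect.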