author     Soeren Apel <soeren@apelpie.net>   2020-03-01 16:50:46 +0100
committer  Soeren Apel <soeren@apelpie.net>   2021-02-13 22:40:15 +0100
commit     460e6cfa5d45c8d2ba0f5be1235ab31d4ec9508b (patch)
tree       26f8a397f0eae6c6248766213883e1b696f1a32a /decoders/signature
parent     f22b1c7447e29573ca60a5a3c6d5b5dcb41282fe (diff)
download   libsigrokdecode-460e6cfa5d45c8d2ba0f5be1235ab31d4ec9508b.tar.gz
           libsigrokdecode-460e6cfa5d45c8d2ba0f5be1235ab31d4ec9508b.zip
Remove samplerate from srd_decoder_logic_output_channel
This means that the samplerate for logic output channels is implicitly determined by the samplerate of the input channels.

The motivation for this is that hard-coding a samplerate isn't possible, but at the same time it's also not possible to determine the samplerate at the time the logic output channels are initialized, as the samplerate can be set at runtime. From my point of view, we would need one of two mechanisms to make this work:

1) Allow creation of logic outputs at runtime via some registration callback, or
2) Allow changing a logic output's samplerate after it has been created, again requiring some kind of callback.

To me, both are currently overkill. Assuming samplerate_in = samplerate_out not only makes this problem go away, since it can easily be handled on the client side where samplerate_in is already known, it also makes handling of the logic data in the PDs easier.
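For illustration, a minimal C sketch of the change described above. The id/desc field names and the helper function are assumptions for illustration only; the one thing taken from this commit is the removal of the samplerate field:

    #include <stdint.h>

    /* Logic output channel descriptor after this change (sketch).
     * The id/desc fields are assumed from context, not from this commit. */
    struct srd_decoder_logic_output_channel {
        char *id;   /* assumed: channel identifier */
        char *desc; /* assumed: human-readable description */
        /* uint64_t samplerate;  -- removed: it cannot be known when the
         * channel is initialized, as the samplerate is set at runtime. */
    };

    /* Hypothetical client-side helper: under the assumption
     * samplerate_out == samplerate_in, the output samplerate is simply
     * the already-known input samplerate. */
    static uint64_t logic_output_samplerate(uint64_t samplerate_in)
    {
        return samplerate_in;
    }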
Diffstat (limited to 'decoders/signature')
0 files changed, 0 insertions, 0 deletions