This fixes some edge cases the previous version didn't handle properly,
by simplifying the logic for determining directly driven wires and the
representatives to use as buffer inputs.
This was previously the number of proposed renames, but since renames can be skipped, that caused the final count to differ from the number of actually renamed objects.
The counts are checked in `tests/various/autoname.ys`.
* Fix building and running unit tests
* Enable unit tests
* Add gtest always
* test-sanitizers.yml: Use makefile.conf
* Proper test setup
* Make it run on macOS
* Run libyosys build only for unit tests after testing is done
* Disable LTO on public CI
---------
Co-authored-by: Krystine Sherwin <93062060+KrystalDelusion@users.noreply.github.com>
This is a complete rewrite of the RTLIL-kernel-side bufnorm code. This
is done to support inout ports and undirected connections as well as to
allow removal of cells while in bufnorm mode.
This doesn't yet update the (experimental) `bufnorm` pass, so to
manually test the new kernel functionality, it is important to use only
`bufnorm -update` and `bufnorm -reset`, which rely entirely on the
kernel functionality. Other modes of the `bufnorm` pass may still fail
in the presence of inout ports or undirected connections.
* When used with -tempinduct mode, -seq <N> causes assertions to be
  ignored in the first N steps. While this has uses for reset modelling,
  it is unnecessary for these test cases and could let failures slip
  through uncaught.
The simple XOR `commutative_eat()` implementation produces a lot of collisions.
https://www.preprints.org/manuscript/201710.0192/v1/download is a useful reference on this topic.
Running the included `hashTest.cc` without the hashlib changes, I get 49,580,349 collisions.
The 49,995,000 (i,j) pairs (0 <= i < 10000, i < j < 10000) hash into only 414,651 unique hash values.
We get simple collisions like (0,1) colliding with (2,3).
With the hashlib changes, we get only 707,099 collisions and 49,287,901 unique hash values.
Much better! The `commutative_hash` implementation corresponds to `Sum(4)` in the paper
mentioned above.
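For intuition, the sketch below contrasts a plain XOR combiner with a commutative
combiner that sums independently mixed hashes. This is not the hashlib code:
`mix64()` (a splitmix64-style finalizer), the combiner names, and the pair-counting
driver are assumptions made for this demo, and the actual `commutative_hash` follows
the `Sum(4)` construction from the paper rather than this simplified version.

```cpp
// Demo only: why XOR-combining is collision-prone and why summing mixed
// hashes does better, while both remain order-independent (commutative).
#include <cstdint>
#include <cstdio>
#include <unordered_set>

// splitmix64 finalizer; stands in for a per-element hash function.
static uint64_t mix64(uint64_t x) {
	x += 0x9e3779b97f4a7c15ULL;
	x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
	x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
	return x ^ (x >> 31);
}

// Naive commutative combine: XOR of the raw values.
// (0,1) and (2,3) collide because 0 ^ 1 == 2 ^ 3 == 1.
static uint64_t xor_combine(uint64_t a, uint64_t b) { return a ^ b; }

// Commutative combine via addition of mixed values: still order-independent,
// but the per-element mixing destroys the simple structure XOR preserves.
static uint64_t sum_combine(uint64_t a, uint64_t b) { return mix64(a) + mix64(b); }

int main() {
	const uint64_t N = 2000; // smaller than the 10000 pairs above, for speed
	std::unordered_set<uint64_t> xor_hashes, sum_hashes;
	uint64_t pairs = 0;
	for (uint64_t i = 0; i < N; i++)
		for (uint64_t j = i + 1; j < N; j++) {
			xor_hashes.insert(xor_combine(i, j));
			sum_hashes.insert(sum_combine(i, j));
			pairs++;
		}
	std::printf("pairs=%llu xor_unique=%zu sum_unique=%zu\n",
	            (unsigned long long)pairs, xor_hashes.size(), sum_hashes.size());
	return 0;
}
```

Running a counter like this shows the same qualitative gap as the numbers above: the
XOR variant collapses almost all pairs onto a small set of values, while the
mix-then-sum variant keeps nearly every pair distinct without giving up commutativity.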