
Prescientfuzz initial integration #1982

Open · wants to merge 23 commits into master

Conversation

DanBlackwell

Hi, I have a new fuzzer based on LibAFL that I would like to integrate. I'd like to be able to run an experiment to compare it with the other fuzzers, but the documented approach (adding to https://github.com/google/fuzzbench/blob/master/service/experiment-requests.yaml) doesn't seem to be used lately - is there some automatic experiment that runs periodically?


google-cla bot commented May 9, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@DonggeLiu
Contributor

Hi, I have a new fuzzer based on LibAFL that I would like to integrate. I'd like to be able to run an experiment to compare it with the other fuzzers, but the documented approach (adding to https://github.com/google/fuzzbench/blob/master/service/experiment-requests.yaml) doesn't seem to be used lately - is there some automatic experiment that runs periodically?

Thanks for submitting a PR, @DanBlackwell! This makes our work a lot easier : )
Here is a guide on how to enable PR experiments: #1967 (comment), hope that helps!

Once it is ready, we can use the /gcbrun commands to run experiments and show results on this PR directly, without having to wait for another day before experiment-requests.yaml triggers a new experiment.

@DanBlackwell
Author

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10_prescientfuzz_init --fuzzers libafl aflplusplus prescientfuzz honggfuzz libfuzzer

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10-prescientfuzz_init --fuzzers prescientfuzz

@DanBlackwell
Author

@DonggeLiu Has this failed to build? I can't see anything in that CI log.

@DonggeLiu
Contributor

Experiment 2024-05-14-prescientfuzz-init data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DonggeLiu
Contributor

DonggeLiu commented May 14, 2024

@DonggeLiu Has this failed to build? I can't see anything in that CI log

Yes, I failed to notice that the experiment name does not match this pattern: "^[a-z0-9-]{0,30}$".
Let me restart one named 2024-05-14-prescientfuzz-init now.
The data & report will be available in the links above later.
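That naming rule can be sketched as a quick pre-check before submitting a /gcbrun command (an illustrative snippet, not FuzzBench's actual validation code):

```python
import re

# Pattern quoted above: lowercase letters, digits, and dashes only,
# at most 30 characters.
EXPERIMENT_NAME_PATTERN = re.compile(r"^[a-z0-9-]{0,30}$")

def is_valid_experiment_name(name: str) -> bool:
    return EXPERIMENT_NAME_PATTERN.fullmatch(name) is not None

# The underscored name from the first attempt is rejected; the dashed rename passes.
print(is_valid_experiment_name("2024-05-10_prescientfuzz_init"))  # False
print(is_valid_experiment_name("2024-05-14-prescientfuzz-init"))  # True
```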

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-prescientfuzz-init --fuzzers prescientfuzz

@DanBlackwell
Author

OK, some runs are still dying from memory starvation. I think I have it fixed now; any chance you could rerun that exact setup for me, @DonggeLiu?

Oh, is there any caching in the Docker setup? I've only updated the fuzzer source repo, so if Docker caches the build images it probably won't fetch the updated version.

@DonggeLiu
Contributor

Oh, is there any caching in the Docker setup? I've only updated the fuzzer source repo, so if Docker caches the build images it probably won't fetch the updated version.

I vaguely recall that this has caused problems before.
Could you please modify the dockerfile just in case? Thanks!

I am happy to re-run the experiment when you are ready, please feel free to ping me.

@DanBlackwell
Author

OK, I have manually specified the commit hash, which should bust the cache. All ready to go!

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-16-prescientfuzz-init --fuzzers prescientfuzz

@DonggeLiu
Contributor

Experiment 2024-05-16-prescientfuzz-init data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DanBlackwell
Author

Sorry, I forgot that it needs a git fetch before checking out... Any chance you can restart that, @DonggeLiu?

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-init --fuzzers prescientfuzz

@DonggeLiu
Contributor

Experiment 2024-05-17-prescientfuzz-init data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DanBlackwell
Author

Hi @DonggeLiu; any chance you can restart it? Sorry, I just patched another bug.

@DonggeLiu
Contributor

Hi @DonggeLiu; any chance you can restart it? Sorry, I just patched another bug.

Sure! I've terminated all instances of the previous experiment and approved the CIs.
Before we start another experiment, would you mind checking if there is any CI error?
I will start the experiment if they behave as expected : )

@DanBlackwell
Author

The CI looks ok to me, and I ran one of the previously failing benchmarks through the debug-builder earlier. I'm hoping this run should have everything working finally; I appreciate your patience! (I'm trying to build a global CFG without the LTO pass - which has been tricky for me)

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-ini --fuzzers prescientfuzz

@DonggeLiu
Contributor

DonggeLiu commented May 17, 2024

The experiment CI says it failed, but the experiment instance and the data directory have been created, so I reckon we are safe.

Experiment 2024-05-17-prescientfuzz-ini data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DanBlackwell
Author

Hi @DonggeLiu, I finally have everything building and running; am I allowed to run, say, 5 instances to test different parameter setups? I'm thinking of adding each setup as a different 'fuzzer' (in ./fuzzers) so they can all run in one experiment. Do let me know if there's a better approach.

@DanBlackwell
Author

Also, I wanted to generate that report just for PrescientFuzz vs LibAFL (as the graphs are hard to read with so many fuzzers); I tried doing the following but got an error:

(.venv) ➜  fuzzbench git:(prescientfuzz_initial_integration) ✗ PYTHONPATH=. python3 analysis/generate_report.py PrescientFuzz --report-dir PrescientFuzzReport --fuzzers prescientfuzz libafl --from-cached-data
INFO:root:Reading experiment data from PrescientFuzzReport/data.csv.gz.
/home/dan/Documents/fuzzbench/analysis/generate_report.py:139: DtypeWarning: Columns (1) have mixed types. Specify dtype option on import or set low_memory=False.
  experiment_df = pd.read_csv(data_path)
INFO:root:Done reading data from PrescientFuzzReport/data.csv.gz.
WARNING:root:Filtered out invalid benchmarks: set().
INFO:root:Rendering HTML report.
/home/dan/Documents/fuzzbench/analysis/plotting.py:485: OrangeDeprecationWarning: compute_CD is deprecated and will be removed in Orange 3.34.
  critical_difference = Orange.evaluation.compute_CD(
/home/dan/Documents/fuzzbench/analysis/plotting.py:488: OrangeDeprecationWarning: graph_ranks is deprecated and will be removed in Orange 3.34.
  Orange.evaluation.graph_ranks(average_ranks.values, average_ranks.index,
/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/jinja2/runtime.py:298: FutureWarning: this method is deprecated in favour of `Styler.to_html()`
  return __obj(*args, **kwargs)
Traceback (most recent call last):
  File "/home/dan/Documents/fuzzbench/analysis/generate_report.py", line 293, in <module>
    sys.exit(main())
  File "/home/dan/Documents/fuzzbench/analysis/generate_report.py", line 277, in main
    generate_report(experiment_names=args.experiments,
  File "/home/dan/Documents/fuzzbench/analysis/generate_report.py", line 261, in generate_report
    detailed_report = rendering.render_report(experiment_ctx, template,
  File "/home/dan/Documents/fuzzbench/analysis/rendering.py", line 46, in render_report
    return template.render(experiment=experiment_results,
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/jinja2/environment.py", line 1301, in render
    self.environment.handle_exception()
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "/home/dan/Documents/fuzzbench/analysis/report_templates/default.html", line 143, in top-level template code
    {{ experiment.relative_code_summary_table.render() }}
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style.py", line 344, in render
    return self._render_html(sparse_index, sparse_columns, **kwargs)
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style_render.py", line 162, in _render_html
    self._compute()
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style_render.py", line 205, in _compute
    r = func(self)(*args, **kwargs)
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style.py", line 1444, in _apply
    result = data.T.apply(func, axis=0, **kwargs).T  # see GH 42005
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/core/frame.py", line 8848, in apply
    return op.apply().__finalize__(self, method="apply")
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/core/apply.py", line 733, in apply
    return self.apply_standard()
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/core/apply.py", line 857, in apply_standard
    results, res_index = self.apply_series_generator()
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/core/apply.py", line 873, in apply_series_generator
    results[i] = self.f(v)
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/core/apply.py", line 138, in f
    return func(x, *args, **kwargs)
  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style.py", line 3560, in _background_gradient
    rgbas = plt.cm.get_cmap(cmap)(norm(gmap))
AttributeError: module 'matplotlib.cm' has no attribute 'get_cmap'

I've tried searching, but I'm a bit stumped as to how this could happen; then again, I'm not particularly experienced with pip / Python, so maybe matplotlib just isn't installed properly?

@DonggeLiu
Contributor

am I allowed to run say 5 instances to test different parameter setups?

Yep sure, this requires changing this value to 5.

I'm thinking to add each setup as a different 'fuzzer' (in ./fuzzers) and then they can all run in one experiment. Do let me know if there's a better approach.

Yep this is the simplest way.
Unfortunately there is no better approach for now.

@DonggeLiu
Contributor

AttributeError: module 'matplotlib.cm' has no attribute 'get_cmap'

I reckon this is likely due to a mismatched version of matplotlib that does not have get_cmap.
I did a quick experiment and found at least one version that works: [screenshot]

Unfortunately we did not document the exact version used in FuzzBench.

@DanBlackwell
Author

Hopefully running the following should get all 4 up together:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-20-prescientfuzz-tuning --fuzzers prescientfuzz_no_backoff prescientfuzz_0_999_backoff prescientfuzz_0_9999_backoff prescientfuzz_0_99999_backoff

I'm guessing you might have to tweak something so that it doesn't merge with the other experiments and leave the graphs too messy?

@DanBlackwell
Author

AttributeError: module 'matplotlib.cm' has no attribute 'get_cmap'

I reckon this is likely due to a mismatched version of matplotlib that does not have get_cmap. I did a quick experiment and found at least one version that works: [screenshot]

Unfortunately we did not document the exact version used in FuzzBench.

I fixed it locally; get_cmap is still in matplotlib v3, it just seems that pandas was namespacing incorrectly. Here's my fix in case anyone else comes across the same thing through search:

Replace the bad line at the bottom of the callstack, here style.py:3560:

  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style.py", line 3560, in _background_gradient
    rgbas = plt.cm.get_cmap(cmap)(norm(gmap))

Remove the .cm, so:

-     rgbas = plt.cm.get_cmap(cmap)(norm(gmap))
+     rgbas = plt.get_cmap(cmap)(norm(gmap))

@DonggeLiu
Contributor

Hopefully running the following should get all 4 up together:
I'm guessing you might have to tweak something so that it doesn't merge with the other experiments and leave the graphs too messy?

Yep, if you want to compare these 4 only (i.e., no other fuzzers in the report), please set this value to false.

Do you still want to run 5 instances for each fuzzer/setup?
I am happy either way : )

am I allowed to run say 5 instances to test different parameter setups?

Yep sure, this requires changing this value to 5.

@DonggeLiu
Contributor

Replace the bad line at the bottom of the callstack, here style.py:3560:

  File "/home/dan/Documents/fuzzbench/.venv/lib/python3.10/site-packages/pandas/io/formats/style.py", line 3560, in _background_gradient
    rgbas = plt.cm.get_cmap(cmap)(norm(gmap))

Thanks, @DanBlackwell!
Let me add your solution to the issue.

@DanBlackwell force-pushed the prescientfuzz_initial_integration branch from fcdf162 to 3fc873d on June 7, 2024 at 12:08
@DanBlackwell
Author

@DonggeLiu I ended up rebasing to include the broken commit and then reverting it - hopefully this solves it for this PR!

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-prescientfuzz-tune --fuzzers prescientfuzz_no_filter prescientfuzz_no_backoff prescientfuzz_0_999_backoff prescientfuzz_0_9999_backoff prescientfuzz_0_99999_backoff

@DonggeLiu
Contributor

Experiment 2024-06-07-prescientfuzz data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-prescientfuzz --fuzzers prescientfuzz_all prescientfuzz_all_no_mopt prescientfuzz_direct_neighbours prescientfuzz_direct_neighbours_no_mopt prescientfuzz_direct_neighbours_rarity prescientfuzz_direct_neighbours_rarity_no_mopt

1 similar comment
@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-prescientfuzz --fuzzers prescientfuzz_all prescientfuzz_all_no_mopt prescientfuzz_direct_neighbours prescientfuzz_direct_neighbours_no_mopt prescientfuzz_direct_neighbours_rarity prescientfuzz_direct_neighbours_rarity_no_mopt

@DanBlackwell
Author

DanBlackwell commented Jun 10, 2024

Hi @DonggeLiu, there are 3 program-fuzzer pairs that didn't run (nan in the screenshot below). Looking at the build logs, they seem to have built successfully, but no directories were created under experiment-folders for these particular test setups. Any clue as to why this might have happened?

EDIT: Any chance this has to do with the /gcbrun command appearing twice above?

[screenshot: coverage table with nan entries for the 3 missing program-fuzzer pairs]

@DonggeLiu
Contributor

Hi @DanBlackwell,
This is unlikely related to the repeated commands above.
I did that because the first one hit a flaky network error when launching the experiment, so I triggered it again.

All three cases are caused by failing to get the fuzz target path, exemplified by the message below:

{
  "insertId": "16hdv0lf3gcl9d",
  "jsonPayload": {
    "fuzzer": "prescientfuzz_direct_neighbours",
    "trial_id": "2956754",
    "message": "Error doing trial.",
    "traceback": "Traceback (most recent call last):\n  File \"/src/experiment/runner.py\", line 468, in experiment_main\n    runner.conduct_trial()\n  File \"/src/experiment/runner.py\", line 290, in conduct_trial\n    self.set_up_corpus_directories()\n  File \"/src/experiment/runner.py\", line 275, in set_up_corpus_directories\n    _unpack_clusterfuzz_seed_corpus(target_binary, input_corpus)\n  File \"/src/experiment/runner.py\", line 144, in _unpack_clusterfuzz_seed_corpus\n    seed_corpus_archive_path = get_clusterfuzz_seed_corpus_path(\n  File \"/src/experiment/runner.py\", line 98, in get_clusterfuzz_seed_corpus_path\n    fuzz_target_without_extension = os.path.splitext(fuzz_target_path)[0]\n  File \"/usr/local/lib/python3.10/posixpath.py\", line 118, in splitext\n    p = os.fspath(p)\nTypeError: expected str, bytes or os.PathLike object, not NoneType\n",
    "benchmark": "jsoncpp_jsoncpp_fuzzer",
    "experiment": "2024-06-07-prescientfuzz",
    "instance_name": "r-2024-06-07-prescientfuzz-2956754",
    "component": "runner"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "project_id": "fuzzbench",
      "zone": "projects/1097086166031/zones/us-central1-c",
      "instance_id": "5151368762717282346"
    }
  },
  "timestamp": "2024-06-07T14:54:50.674638367Z",
  "severity": "ERROR",
  "logName": "projects/fuzzbench/logs/fuzzbench",
  "receiveTimestamp": "2024-06-07T14:54:50.674638367Z"
}

There is no further info beyond this.

Could you please check if you can reproduce/fix this locally?
Meanwhile, I will rerun an exp with those three pairs only, just in case something flaky caused this.
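The crash at the bottom of that traceback is simply os.path.splitext receiving None. It can be reproduced in isolation with a defensive guard that fails with a clearer message (a sketch only; the function name and error text are hypothetical, and the `_seed_corpus.zip` suffix follows the usual ClusterFuzz naming convention rather than FuzzBench's exact code):

```python
import os

def seed_corpus_archive_path(fuzz_target_path):
    """Derive a ClusterFuzz-style seed corpus archive name from a fuzz
    target path. Failing fast on None gives a clearer error than the
    bare TypeError from os.path.splitext seen in the log above."""
    if fuzz_target_path is None:
        raise ValueError("fuzz target binary was not found for this trial")
    root = os.path.splitext(fuzz_target_path)[0]
    return root + "_seed_corpus.zip"

print(seed_corpus_archive_path("/out/jsoncpp_fuzzer"))
# /out/jsoncpp_fuzzer_seed_corpus.zip
```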

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-pdn --fuzzers prescientfuzz_direct_neighbours --benchmark jsoncpp_jsoncpp_fuzzer libxml2_xml

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-panm --fuzzers prescientfuzz_all_no_mopt --benchmark libpng_libpng_read_fuzzer

@DonggeLiu
Contributor

Experiment 2024-06-11-pdn data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

Experiment 2024-06-11-panm data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DanBlackwell
Author

DanBlackwell commented Jun 11, 2024

Hi @DonggeLiu , there have been a handful of others displaying similar (flaky?) behaviour; for example libafl_19f5081 openh264 here: https://storage.googleapis.com/www.fuzzbench.com/reports/experimental/2024-05-27-prescientfuzz/index.html. Again the build log looks to be fine.

I'm afraid I can't see any particular pattern to it (the benchmarks are not always the same, nor are the fuzzers).

@DanBlackwell
Author

How can I manually merge the results? I would like to combine a couple of the previous experiments. I've tried grepping out and copying lines from the data.csv file linked at the bottom of each experiment, but I must be messing it up somehow because the added fuzzer setups don't appear :/

@DonggeLiu
Contributor

Experiment 2024-06-12-l19f-openh264 data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-12-l19f-openh264 --fuzzers libafl_19f5081 --benchmark openh264_decoder_fuzzer

@DonggeLiu
Contributor

How can I manually merge the results?

I guess you might have seen this?
https://google.github.io/fuzzbench/developing-fuzzbench/custom_analysis_and_reports#generating-reports

added fuzzer setups don't appear

Sorry do you mean the new fuzzers you added did not show up in the report, or the past experiment results of baseline fuzzers (e.g., libfuzzer, afl) did not show up?

@DonggeLiu
Contributor

gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-12-l19f-openh264 --fuzzers libafl_19f5081 --benchmark openh264_decoder_fuzzer

This failed because: gcbrun_experiment.py: error: argument -f/--fuzzers: invalid choice: 'libafl_19f5081'.
I guess it has been removed?

@DanBlackwell
Author

gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-12-l19f-openh264 --fuzzers libafl_19f5081 --benchmark openh264_decoder_fuzzer

This failed because: gcbrun_experiment.py: error: argument -f/--fuzzers: invalid choice: 'libafl_19f5081'. I guess it has been removed?

Ah, this doesn't need rerunning; I thought I'd point it out in case it's useful for identifying the source of the flakiness (if indeed it is flakiness).

@DanBlackwell
Author

How can I manually merge the results?

I guess you might have seen this? https://google.github.io/fuzzbench/developing-fuzzbench/custom_analysis_and_reports#generating-reports

added fuzzer setups don't appear

Sorry do you mean the new fuzzers you added did not show up in the report, or the past experiment results of baseline fuzzers (e.g., libfuzzer, afl) did not show up?

Yeah, so I want to generate a report manually but merge in the data from other experiments. Downloading the data.csv.gz at the bottom of the page, I can see the data format, but I can't seem to merge the data from multiple experiments and get the generated report to show all the setups. Is there a script you use for merging private experiments with the standard experiments? If I can see that script, I should be able to figure it out myself.

@DonggeLiu
Contributor

analysis/generate_report.py is the script.

I recall having to manually concatenate multiple data.csv files and then run the script to show all fuzzers.
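That concatenation can be done with the standard library, keeping a single header row (a sketch; the function name is illustrative, and it assumes every experiment's data.csv export shares identical columns):

```python
import csv

def merge_data_csvs(input_paths, output_path):
    """Concatenate several FuzzBench data.csv exports into one file,
    writing the header only once and checking the columns line up."""
    header = None
    with open(output_path, "w", newline="") as out_file:
        writer = csv.writer(out_file)
        for path in input_paths:
            with open(path, newline="") as in_file:
                reader = csv.reader(in_file)
                file_header = next(reader)
                if header is None:
                    header = file_header
                    writer.writerow(header)
                elif file_header != header:
                    raise ValueError(f"column mismatch in {path}")
                writer.writerows(reader)
```

To feed the merged result to analysis/generate_report.py with --from-cached-data, it presumably needs to be gzipped back to data.csv.gz in the report directory, matching the path the script reads in the log earlier in this thread.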

@DanBlackwell
Author

DanBlackwell commented Jun 13, 2024

Good news: I've now figured out how to generate the combined reports I wanted (I think I must have broken the CSV in some way before).

@DanBlackwell
Author

Hi @DonggeLiu , could you run the following command for me:

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-18-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff libafl_rand_scheduler

I believe this will be the final setup and then I'll have all I need to write up the experimentation in full.

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-19-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff libafl_rand_scheduler

@DanBlackwell
Author

DanBlackwell commented Jun 19, 2024

I'm really sorry, I managed to make a typo there which broke what I was trying to test. @DonggeLiu any chance you could re-run it with?

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-19b-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff

@DonggeLiu
Contributor

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-07-03-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff

@DanBlackwell
Author

2024-07-03-prescientfuzz experiment links:
experiment data: data
experiment report: report
