Prescientfuzz initial integration #1982
base: master
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Thanks for submitting a PR, @DanBlackwell! This makes our work a lot easier :) Once it is ready, we can use the `/gcbrun` command below to launch an experiment.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10_prescientfuzz_init --fuzzers libafl aflplusplus prescientfuzz honggfuzz libfuzzer
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10-prescientfuzz_init --fuzzers prescientfuzz
@DonggeLiu Has this failed to build? I can't see anything in that CI log.
Experiment
Yes, I failed to notice that the experiment name does not match this pattern:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-prescientfuzz-init --fuzzers prescientfuzz
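From the two names above, the rejected one used underscores while the accepted one used only hyphens. The authoritative pattern lives in FuzzBench's own sources; as an illustrative sketch, a lowercase/digit/hyphen rule reproduces the observed behaviour:

```python
import re

# Illustrative name rule only; the real regex is defined in FuzzBench
# (see run_experiment.py), this just matches the behaviour seen above.
NAME_RE = re.compile(r'^[a-z0-9-]+$')

print(bool(NAME_RE.match('2024-05-10_prescientfuzz_init')))  # False (underscores)
print(bool(NAME_RE.match('2024-05-14-prescientfuzz-init')))  # True (hyphens only)
```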
Ok, some trials are still dying from memory starvation. I think I have it fixed now; any chance you could rerun that exact setup for me, @DonggeLiu? Oh, is there any caching in the Docker setup? I've only updated the fuzzer source repo, so if Docker caches the build images it probably won't fetch the updated version.
I vaguely recall that this has caused problems before. I am happy to re-run the experiment when you are ready; please feel free to ping me.
Ok, I have manually specified the commit hash, which should bust the cache. All ready to go!
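Pinning an exact commit in the fuzzer's builder Dockerfile is one way to force this: changing the pinned value changes the layer's content, so Docker cannot reuse a stale cached clone. A hypothetical fragment (the repository URL, variable name, and commit are illustrative, not taken from this PR's actual Dockerfile):

```dockerfile
# Hypothetical builder.Dockerfile fragment: bumping FUZZER_COMMIT
# invalidates this layer and every layer after it.
ENV FUZZER_COMMIT=<pinned-commit-hash>
RUN git clone <fuzzer-repo-url> /fuzzer-src && \
    cd /fuzzer-src && git checkout "$FUZZER_COMMIT"
```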
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-16-prescientfuzz-init --fuzzers prescientfuzz
Experiment
I forgot that it needs
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-init --fuzzers prescientfuzz
Experiment
Hi @DonggeLiu, any chance you can restart it? Just patched another bug, sorry.
Sure! I've terminated all instances of the previous experiment and approved the CIs.
The CI looks ok to me, and I ran one of the previously failing benchmarks through the debug builder earlier. I'm hoping this run finally has everything working; I appreciate your patience! (I'm trying to build a global CFG without the LTO pass, which has been tricky for me.)
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-ini --fuzzers prescientfuzz
The Experiment
Hi @DonggeLiu, I finally have everything building and running; am I allowed to run, say, 5 instances to test different parameter setups? I'm thinking of adding each setup as a different 'fuzzer' (in
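Since each FuzzBench fuzzer is a directory under `fuzzers/`, one common way to test parameter setups is a variant directory whose `fuzzer.py` delegates to the base integration after setting a parameter via the environment. A hypothetical sketch (the variable name `BACKOFF_FACTOR`, its value, and the delegation target are assumptions, shown as comments because the base module only exists inside a FuzzBench checkout):

```python
# Hypothetical fuzzers/prescientfuzz_0_999_backoff/fuzzer.py
import os

def build():
    # Reuse the base fuzzer's build unchanged, e.g.:
    #   from fuzzers.prescientfuzz import fuzzer as base
    #   base.build()
    pass

def fuzz(input_corpus, output_corpus, target_binary):
    # Set the parameter under test before delegating to the base fuzzer.
    os.environ['BACKOFF_FACTOR'] = '0.999'  # illustrative name and value
    # base.fuzz(input_corpus, output_corpus, target_binary)
```

This keeps each setup selectable by name in a `/gcbrun ... --fuzzers` list.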
Also, I wanted to generate that report just for PrescientFuzz vs LibAFL (the graphs are hard to read with so many fuzzers); I tried the following but got an error:
I've tried searching, but I'm a bit stumped as to how this can happen; although I am not particularly experienced with pip/Python, so maybe matplotlib is just not installed properly?
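For a quick two-fuzzer comparison without the full report pipeline, the experiment's data CSV can also be filtered directly before plotting. A hedged sketch with toy data; the column names here (`fuzzer`, `benchmark`, `time`, `edges_covered`) are assumptions about the export's schema, not a verified FuzzBench format:

```python
import io
import pandas as pd

# Toy stand-in for an experiment's exported data CSV.
csv = io.StringIO(
    "fuzzer,benchmark,trial_id,time,edges_covered\n"
    "prescientfuzz,jsoncpp_jsoncpp_fuzzer,1,900,100\n"
    "libafl,jsoncpp_jsoncpp_fuzzer,2,900,90\n"
    "honggfuzz,jsoncpp_jsoncpp_fuzzer,3,900,80\n"
)
df = pd.read_csv(csv)

# Keep only the two fuzzers of interest.
subset = df[df['fuzzer'].isin(['prescientfuzz', 'libafl'])]
print(sorted(subset['fuzzer'].unique()))  # ['libafl', 'prescientfuzz']
```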
Yep sure, this requires changing this value to 5.
Yep, this is the simplest way.
Hopefully running the following should get all 4 up together:
I'm guessing you might have to tweak something so that it doesn't merge with the other experiments and leave the graphs too messy?
Yep, if you want to compare these 4 only (i.e., no other fuzzers in the report), please set this value. Do you still want to run 5 instances for each fuzzer/setup?
Thanks, @DanBlackwell!
Force-pushed fcdf162 to 3fc873d (compare).
@DonggeLiu I ended up rebasing to include the broken commit and then reverting it - hopefully this solves it for this PR!
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-prescientfuzz-tune --fuzzers prescientfuzz_no_filter prescientfuzz_no_backoff prescientfuzz_0_999_backoff prescientfuzz_0_9999_backoff prescientfuzz_0_99999_backoff
Experiment
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-prescientfuzz --fuzzers prescientfuzz_all prescientfuzz_all_no_mopt prescientfuzz_direct_neighbours prescientfuzz_direct_neighbours_no_mopt prescientfuzz_direct_neighbours_rarity prescientfuzz_direct_neighbours_rarity_no_mopt
Hi @DonggeLiu, there are 3 program-fuzzer pairs that didn't run ( EDIT: any chance this has to do with the
Hi @DanBlackwell, all three cases are caused by failing to get the fuzz target path, exemplified by the message below:

```json
{
  "insertId": "16hdv0lf3gcl9d",
  "jsonPayload": {
    "fuzzer": "prescientfuzz_direct_neighbours",
    "trial_id": "2956754",
    "message": "Error doing trial.",
    "traceback": "Traceback (most recent call last):\n File \"/src/experiment/runner.py\", line 468, in experiment_main\n runner.conduct_trial()\n File \"/src/experiment/runner.py\", line 290, in conduct_trial\n self.set_up_corpus_directories()\n File \"/src/experiment/runner.py\", line 275, in set_up_corpus_directories\n _unpack_clusterfuzz_seed_corpus(target_binary, input_corpus)\n File \"/src/experiment/runner.py\", line 144, in _unpack_clusterfuzz_seed_corpus\n seed_corpus_archive_path = get_clusterfuzz_seed_corpus_path(\n File \"/src/experiment/runner.py\", line 98, in get_clusterfuzz_seed_corpus_path\n fuzz_target_without_extension = os.path.splitext(fuzz_target_path)[0]\n File \"/usr/local/lib/python3.10/posixpath.py\", line 118, in splitext\n p = os.fspath(p)\nTypeError: expected str, bytes or os.PathLike object, not NoneType\n",
    "benchmark": "jsoncpp_jsoncpp_fuzzer",
    "experiment": "2024-06-07-prescientfuzz",
    "instance_name": "r-2024-06-07-prescientfuzz-2956754",
    "component": "runner"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "project_id": "fuzzbench",
      "zone": "projects/1097086166031/zones/us-central1-c",
      "instance_id": "5151368762717282346"
    }
  },
  "timestamp": "2024-06-07T14:54:50.674638367Z",
  "severity": "ERROR",
  "logName": "projects/fuzzbench/logs/fuzzbench",
  "receiveTimestamp": "2024-06-07T14:54:50.674638367Z"
}
```

There is no further info beyond this. Could you please check if you can reproduce/fix this locally?
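The traceback shows `os.path.splitext` being handed `None` because the fuzz target path lookup returned nothing. A minimal reproduction, plus the kind of guard that would surface a clearer error (the function name and the `ValueError` message are illustrative, not FuzzBench's actual helpers; the `_seed_corpus.zip` suffix follows the ClusterFuzz convention mentioned in the traceback):

```python
import os

def get_seed_corpus_path(fuzz_target_path):
    # Mirrors the failing call chain: splitext() raises TypeError when
    # the target path lookup returned None instead of a string.
    if fuzz_target_path is None:
        raise ValueError('fuzz target path could not be determined; '
                         'check the target binary exists in the build output')
    return os.path.splitext(fuzz_target_path)[0] + '_seed_corpus.zip'

print(get_seed_corpus_path('/out/jsoncpp_fuzzer'))  # /out/jsoncpp_fuzzer_seed_corpus.zip
try:
    get_seed_corpus_path(None)
except ValueError as err:
    print('caught:', err)
```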
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-pdn --fuzzers prescientfuzz_direct_neighbours --benchmark jsoncpp_jsoncpp_fuzzer libxml2_xml
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-panm --fuzzers prescientfuzz_all_no_mopt --benchmark libpng_libpng_read_fuzzer
Experiment
Experiment
Hi @DonggeLiu, there have been a handful of others displaying similar (flaky?) behaviour, for example. I'm afraid I can't see any particular pattern to it (the benchmarks are not always the same, nor are the fuzzers).
How can I manually merge the results? I would like to combine a couple of the previous experiments. I've tried doing it by grepping out and copying the lines using the
Experiment
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-12-l19f-openh264 --fuzzers libafl_19f5081 --benchmark openh264_decoder_fuzzer
I guess you might have seen this?
Sorry, do you mean the new fuzzers you added did not show up in the report, or that the past experiment results of baseline fuzzers (e.g., libfuzzer, afl) did not show up?
This failed because:
Ah, this doesn't need rerunning; I thought I'd point it out in case it's useful for identifying the source of the flakiness (if indeed it is flakiness).
Yeah, so I want to generate a report manually, but merge in the data from other experiments. Downloading the
I recall I had to manually concat multiple
Good news is that I've figured out how to generate the combined reports that I wanted (I think I must have broken the CSV in some way before).
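Manually combining per-experiment data files can be sketched with pandas: since the experiments export a shared schema, a row-wise concatenation is enough. A hedged example with toy stand-ins (the column names are illustrative, not a verified FuzzBench export format):

```python
import io
import pandas as pd

# Two toy stand-ins for per-experiment CSV exports with a shared schema.
exp_a = io.StringIO("experiment,fuzzer,benchmark,edges_covered\n"
                    "2024-06-07-prescientfuzz,prescientfuzz_all,jsoncpp_jsoncpp_fuzzer,100\n")
exp_b = io.StringIO("experiment,fuzzer,benchmark,edges_covered\n"
                    "2024-06-12-l19f-openh264,libafl_19f5081,openh264_decoder_fuzzer,90\n")

# Stack the rows; ignore_index renumbers so the result is one clean table.
merged = pd.concat([pd.read_csv(exp_a), pd.read_csv(exp_b)], ignore_index=True)
merged.to_csv('merged.csv', index=False)  # single file to feed a report from
print(len(merged))  # 2
```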
Hi @DonggeLiu, could you run the following command for me:
I believe this will be the final setup, and then I'll have all I need to write up the experimentation in full.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-19-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff libafl_rand_scheduler
I'm really sorry, I managed to make a typo there which broke what I was trying to test. @DonggeLiu, any chance you could re-run it with the command below?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-07-03-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff
Hi, I have a new fuzzer based on LibAFL that I would like to integrate. I'd like to be able to run an experiment to compare it with the other fuzzers, but the documented approach (adding to https://github.com/google/fuzzbench/blob/master/service/experiment-requests.yaml) doesn't seem to be used lately - is there some automatic experiment that runs periodically?
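For context, entries in experiment-requests.yaml take roughly this shape; the field names below are my best recollection of the file's existing entries and should be checked against the file itself before submitting:

```yaml
# Hypothetical request appended to service/experiment-requests.yaml
- experiment: 2024-05-10-prescientfuzz
  fuzzers:
    - prescientfuzz
    - libafl
    - aflplusplus
```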