Approaches to testing with a fuzzer
"Fuzzing is very useful - it can find vulnerabilities that are difficult to find with any other tools"
Dasha
March 22

Fuzzing

Fuzzing is a very good tool for analyzing the security of a smart contract. Not everyone uses it yet, even though it is quite simple and effective. It is very useful because it can find vulnerabilities that are difficult to find with other tools.
Echidna implements a testing method called fuzzing.

There are four main testing techniques:

1. Unit testing. This is familiar to everyone: it is a mainstream testing method that almost all developers know.
2. Manual analysis. This is what we do at Oxorio: we are given the code, we read it, and we look for errors based on our knowledge, experience, and so on.
3. Fully automated analysis. For Solidity code, the tool that implements this method is Slither: we simply run it on the code and it reports issues.
4. Semi-automated analysis. Fuzzing belongs to this category.
All methods have their advantages and disadvantages. For example, unit testing lets us catch errors as deviations from a set of cases we consider correct: the code should behave in a certain way, and if we change something in the code and a test fails, an error has crept in.
Manual analysis is a labor-intensive and expensive way to check code for vulnerabilities: you need a team of people who will spend a long time reading the code and looking for errors in it, which is costly and not always effective. It also has a significant drawback: manual analysis is, as a rule, done against a specific commit. Meanwhile, the code keeps changing, and even if we are convinced that a particular version contains no errors or vulnerabilities, as soon as the next commit changes the code a little, we can no longer guarantee that it is error-free. So this method works only for a fixed version of the code that does not require updates.
The disadvantage of automated testing is that it often produces false positives and false negatives. It helps to find something, but, as a rule, it is not very effective, especially for complex vulnerabilities.
The last method is fuzzing, semi-automated testing. It is semi-automated because we cannot just take the code and run a fuzzer on it; we have to add something of our own. Just as we write unit tests, for the fuzzer we must define a set of conditions, invariants, that describe how we want the code to behave, and the fuzzer will test the code against them. In this sense it is similar to unit testing.

How it works

How does a fuzzer actually work? Very simply. It takes our code, which has a set of public or external methods. These methods have parameters, and the fuzzer calls the functions in random order with random parameters. After each call it checks the properties we wrote: that they still hold and that no invariants, no predefined properties of the system, are violated. If some random sequence of calls and parameters violates them, then something is wrong.
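A minimal sketch of what this looks like with Echidna (the Counter contract and its functions here are hypothetical, not taken from the article). Echidna treats parameterless public functions whose names start with echidna_ and that return bool as properties; it calls the other public functions with random arguments and reports any call sequence that makes a property return false.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical target: a counter that is supposed to stay below 100.
contract Counter {
    uint256 public value;

    function add(uint256 amount) public {
        // Bug for illustration: there is no upper-bound check.
        value += amount;
    }
}

// Fuzzing harness: Echidna calls add() with random arguments and checks after
// every call that each echidna_* property still returns true.
contract TestCounter is Counter {
    function echidna_value_below_limit() public view returns (bool) {
        return value < 100;
    }
}
```

You would run it with something like `echidna TestCounter.sol --contract TestCounter`; the exact command-line flags depend on the Echidna version you have installed.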

Example

One example of an invariant, or property of a system, that can be checked with fuzzing is access control: a contract has an owner who can call a certain set of functions that only he, the contract administrator, may call, and it is assumed that a random user or third party cannot become the owner and execute privileged functions.
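A sketch of how such an access-control property could be expressed for Echidna, assuming a hypothetical Owned contract with a deliberately broken ownership setter:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract with an owner and a flawed way to change it.
contract Owned {
    address public owner = msg.sender;

    function setOwner(address newOwner) public {
        // Bug for illustration: the `require(msg.sender == owner)` check is missing.
        owner = newOwner;
    }
}

contract TestOwned is Owned {
    address private deployer;

    constructor() {
        deployer = owner;
    }

    // Property: ownership never leaves the original deployer.
    function echidna_owner_unchanged() public view returns (bool) {
        return owner == deployer;
    }
}
```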
It is not always possible to find such problems with unit tests. But with Echidna you can sometimes find things unit tests cannot handle: an incorrectly implemented pause, for example. When a contract is paused, it is assumed that no transfers or token operations are possible. We can state this as a property: if the contract is paused, tokens cannot be sent. If Echidna finds a sequence of calls that allows tokens to be transferred while the contract is paused, we have discovered a vulnerability.
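A sketch of that pause property under the same convention, with a hypothetical, heavily simplified pausable token; the harness starts the system in the paused state so any balance change indicates a transfer that slipped through.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical pausable token, reduced to the parts that matter here;
// the pause/unpause admin functions are omitted for brevity.
contract PausableToken {
    mapping(address => uint256) public balanceOf;
    bool public paused;

    constructor() {
        balanceOf[msg.sender] = 1_000_000;
    }

    function transfer(address to, uint256 amount) public {
        // Bug for illustration: the `require(!paused)` guard is missing.
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
    }
}

contract TestPausableToken is PausableToken {
    address private holder;
    uint256 private snapshot;

    constructor() {
        holder = msg.sender;      // the deployer received the initial supply
        snapshot = balanceOf[holder];
        paused = true;            // start the system in the paused state
    }

    // Property: while paused, the holder's balance must not change.
    function echidna_no_transfers_while_paused() public view returns (bool) {
        return !paused || balanceOf[holder] == snapshot;
    }
}
```

For the fuzzer to actually attempt a transfer from the holder, the deployer address has to be among Echidna's configured transaction senders, which is typically the case with the default configuration.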
Another example is incorrect arithmetic, for instance when a user can end up with an effectively unlimited number of tokens as a result of calling some functions with some parameters. With unit tests we would need to roughly understand which parameters have to be passed for this to happen, and we may not be able to anticipate that. But if the system is large and complex, with many token operations scattered across a large code base and many contracts, Echidna can overcome this complexity with random search: it tries all available methods with all kinds of parameters and checks every time that the number of tokens on the user's balance stays below some reasonable number we set.
If that is not the case, it tells us which sequence of calls violated the property.
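A sketch of such an arithmetic bound, assuming a hypothetical reward contract; the addresses in the property are the ones commonly used as Echidna's default sender addresses (an assumption here, so adjust them to your own configuration):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical reward contract: a user should never accumulate more than
// MAX_REWARD in total, but the cap is enforced per call rather than cumulatively.
contract Rewards {
    uint256 public constant MAX_REWARD = 1000;
    mapping(address => uint256) public claimed;

    function claim(uint256 amount) public {
        // Bug for illustration: this only bounds a single claim, not the total.
        require(amount <= MAX_REWARD, "too much");
        claimed[msg.sender] += amount;
    }
}

contract TestRewards is Rewards {
    // Property: no caller ever accumulates more than the cap.
    function echidna_claims_bounded() public view returns (bool) {
        return claimed[address(0x10000)] <= MAX_REWARD
            && claimed[address(0x20000)] <= MAX_REWARD
            && claimed[address(0x30000)] <= MAX_REWARD;
    }
}
```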

Invariants

Invariants/properties come in two main types, which reflect two approaches to testing with a fuzzer:
We can have fairly simple invariants defined at the level of an individual function; for example, addition must satisfy the commutative property: a + b = b + a. A fuzzer can identify flaws in such code in seconds; that is, we can test the code at the function level and check its behavior. These are called function-level invariants (see the sketch below).
The other type are system-level invariants, which depend on the entire system. In this case we need a deployed contract; we cannot test an individual component or function in isolation.
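To illustrate the first type, here is a sketch of a function-level invariant written in assertion style (the saturating-addition function is hypothetical). Echidna fuzzes the arguments directly when assertion checking is enabled; the exact flag or config key depends on your Echidna version.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract TestCommutativity {
    // Hypothetical custom addition (saturating instead of overflowing) whose
    // commutativity we want to check in isolation.
    function satAdd(uint256 a, uint256 b) internal pure returns (uint256) {
        unchecked {
            uint256 c = a + b;
            return c < a ? type(uint256).max : c;
        }
    }

    // Function-level invariant in assertion style: Echidna fuzzes a and b
    // directly and reports any pair for which the assertion fails.
    function checkCommutative(uint256 a, uint256 b) public pure {
        assert(satAdd(a, b) == satAdd(b, a));
    }
}
```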

Internal and External testing

Another thing I would like to mention is internal and external testing. They differ as follows: if we want to test some contract with Echidna, for example a token, we take this token, inherit from it, and in that inherited file write a single property to check, for example that a user's balance never exceeds the total supply. We then run this test with Echidna. This is called internal testing, because by inheriting we get access to the internal functions of the contract.
This works in simple cases. But if we have not one contract but several, all of which must be deployed, with contract addresses passed to one another and so on, this scheme no longer works: we cannot just inherit and call internal functions. The only way to test such a system is external testing: we deploy the set of contracts in the constructor of our test wrapper, then call their public and external functions (the only ones available to us), and after each call we check the invariants.
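A sketch of both styles side by side, assuming a hypothetical Token contract (all names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical token under test.
contract Token {
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    function mint(address to, uint256 amount) public {
        totalSupply += amount;
        balanceOf[to] += amount;
    }

    function transfer(address to, uint256 amount) public {
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
    }
}

// Internal testing: inherit from the contract so its internal state is directly
// reachable, and state a single property about it.
contract InternalTest is Token {
    function echidna_balance_not_above_supply() public view returns (bool) {
        return balanceOf[msg.sender] <= totalSupply;
    }
}

// External testing: deploy the contract(s) in the harness constructor and interact
// with them only through their public/external functions, checking the invariant
// after the wrapped calls the fuzzer makes.
contract ExternalTest {
    Token private token;

    constructor() {
        token = new Token();
    }

    function mint(uint256 amount) public {
        token.mint(address(this), amount);
    }

    function transfer(address to, uint256 amount) public {
        token.transfer(to, amount);
    }

    function echidna_balance_not_above_supply() public view returns (bool) {
        return token.balanceOf(address(this)) <= token.totalSupply();
    }
}
```

You would point Echidna at one harness at a time; the point here is the wiring: the internal harness touches the contract's state directly, while the external one only goes through the deployed contract's public interface.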