1 Introduction
In recent years, there has been growing interest in distributed algorithms for networks of rational agents that may deviate from the prescribed algorithm in order to increase their profit [1, 2, 3, 6, 13]. For example, an agent may have a higher profit if zero is decided in a consensus algorithm, or an agent may prefer to be (or not to be) the elected leader in a leader election algorithm. The goal is to design distributed algorithms that reach equilibrium, that is, in which no agent can profit by cheating.
In this paper we study the consensus problem in a network of rational agents, in which each agent has a preferred decision value. We consider (n-1)-resilient equilibrium, that is, an equilibrium that is resilient to any coalition of up to n-1 agents that may collude in order to increase their expected profit (utility). This problem was proposed in [3] and studied also in [4], where the authors suggest an (n-1)-resilient equilibrium for binary consensus in a synchronous ring.
We prove that in any (n-1)-resilient equilibrium for binary consensus, the output of the agents must be the XOR of the inputs of all agents. Thus, due to validity, there is no (n-1)-resilient equilibrium for binary consensus in even-sized networks, and the algorithm in [4] works well only for odd-sized networks. Still, we show that the protocol suggested in [4] for even-sized networks reaches (n-1)-resilient equilibrium for binary consensus with uniform input distribution, for any even n.
We further show that multivalued consensus is impossible, i.e., there is no (n-1)-resilient equilibrium for multivalued consensus for c ≥ 3, where c is the number of possible values. Thus, surprisingly, there is a computational gap between binary and multivalued consensus in this model. Note that it was previously shown that in this game-theoretic model, leader election is also not equivalent to consensus [4].
Furthermore, we show that in this model, deterministic binary consensus is equivalent to Resilient Input Sharing (RIS), a natural problem in distributed computing in which each agent shares its input with all other agents in the network (a variant of the knowledge-sharing problem defined in [4]). That is, in any odd-sized network with uniform input distribution, any algorithm for RIS can be transformed into an (n-1)-resilient equilibrium for deterministic binary consensus and vice versa. This provides a sufficient and necessary condition for an (n-1)-resilient equilibrium for deterministic binary consensus.
1.1 Our Contributions
Our contributions are as follows:

Any (n-1)-resilient equilibrium for binary consensus decides on the XOR of all input values.

In any (n-1)-resilient equilibrium for binary consensus the input and output distributions are uniform.

The protocol suggested in [4] for even-sized networks reaches (n-1)-resilient equilibrium for binary consensus with uniform input distribution, for any even n.

There is no (n-1)-resilient equilibrium for multivalued consensus for c ≥ 3 possible inputs.

A deterministic (n-1)-resilient equilibrium for binary consensus in a network exists iff:

The network size n is odd.

The input distribution is uniform.

An equilibrium for Resilient Input Sharing (RIS) is possible in the network topology.

The model, notations and some definitions are given in Section 2, and we discuss our results and further thoughts in Section 6.
1.2 Related Work
The secret sharing problem [16]
initiated the connection between distributed computing and game theory. Further works in this line of research considered multiparty communication with Byzantine and rational agents
[1, 8, 11, 12, 15]. In [3], the first distributed protocols for a network of rational agents are presented, specifically protocols for fair leader election. In [4], the authors continue this line of research by providing basic building blocks for game-theoretic distributed algorithms, namely wake-up and knowledge-sharing building blocks that are in equilibrium; equilibria for consensus, renaming, and leader election are presented using these building blocks. The consensus algorithm in [4] is claimed to reach (n-1)-resilient equilibrium in a ring or complete network, using the knowledge-sharing building block to share the inputs of all processors in the network and outputting the XOR of all inputs. Consensus was further researched in [14], where the authors show that there is no ex-post Nash equilibrium for rational consensus, and present a Nash equilibrium that tolerates failures under some minimal assumptions on the failure pattern. Equilibria for fair leader election and fair coin toss are also presented and discussed in [17], where the protocol of [3, 4] is shown to be resilient only to coalitions of sublinear size, and a modification of the leader election protocol from [3, 4] that is resilient to larger coalitions is proposed.
In [5], the authors examine the impact of a priori knowledge of the network size on the equilibrium of distributed algorithms, assuming the id space is unlimited and thus vulnerable to a Sybil attack [9]. In [7] the authors remove this assumption and assume the id space is bounded, examining the relation between the size of the id space and the number of agents in the network for which an equilibrium is possible.
2 Model
We use the standard message-passing model, where the network is a bidirectional graph with n nodes, each node representing a rational agent, following the model in [2, 3]. We assume n is a priori known to all agents, the network is 2-vertex-connected, and all agents start the protocol together, i.e., all agents wake up at the same time (the Wake-Up building block of [4] can be used to relax this assumption). In Sections 3 and 4 the results apply to both synchronous and asynchronous communication networks, while Section 5 assumes a synchronous network.
In the consensus problem, each agent has an id and an input x_i, and must output a decision dec_i. The special output ⊥ can be output by an agent to abort the protocol when a deviation by another agent is detected. A protocol achieves consensus if it satisfies the following [10]:

Agreement: All agents decide on the same value, i.e., dec_i = dec_j for all agents i, j.

Validity: If v was decided, then v was the input of some agent, i.e., if dec = v then x_i = v for some agent i.

Termination: Every agent eventually decides.
Definition 2.1 (Protocol Outcome).
The outcome of the protocol is determined by the input and output of all agents. An outcome is legal if it satisfies agreement, validity, and termination, otherwise the outcome is erroneous.
Considering individual rational agents, each agent i has a utility function u_i over the possible outcomes of the protocol. The higher the value u_i assigns to an outcome, the better this outcome is for agent i. We assume the utility function of each agent satisfies Solution Preference [3]:
Definition 2.2 (Solution Preference).
The utility function of any agent never assigns a higher utility to an erroneous outcome than to a legal one.
Thus, Solution Preference guarantees that an agent never has an incentive to sabotage the protocol, that is, to prefer an outcome that falsifies agreement, validity, or termination. However, agents may take risks that might lead to erroneous outcomes, if these risks also lead to a legal outcome that increases their expected utility.
An intuitive example of a utility function of an agent i with a preference towards a decision value of 1 is: u_i = 1 if the outcome is legal and the decision is 1, and u_i = 0 otherwise.
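As a toy illustration (the encoding is ours, not fixed by the paper), such a utility function and its Solution Preference property can be written as:

```python
def utility(outcome_legal: bool, decision: int, preferred: int = 1) -> int:
    """Example utility of an agent that prefers `preferred` to be decided.

    Solution Preference: an erroneous outcome never gets a higher
    utility than a legal one (here, erroneous outcomes always get 0).
    """
    if outcome_legal and decision == preferred:
        return 1
    return 0
```

A deviation that risks an erroneous outcome can still pay off for the agent if it increases the probability of a legal outcome whose decision is the preferred value.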
All agents are given a protocol at the start of the execution, but any agent may deviate and execute a different protocol if it increases its expected utility. A protocol is said to reach equilibrium if no agent can unilaterally increase its expected utility by deviating from the protocol.
Definition 2.3 (Nash Equilibrium). (Previous works defined equilibrium over each step of the protocol. For convenience, this definition is slightly different, but it is easy to see that it is equivalent.)
A protocol P is said to reach equilibrium if, for any agent i, there is no protocol P' that i may execute that leads to a higher expected utility for i, assuming all other agents follow P.
2.1 Coalitions
We define a coalition C of size k as a set of k rational agents that cooperate to increase the utility of each agent in C. A protocol that reaches k-resilient equilibrium [3] is resilient to coalitions of size up to k, that is, no group of k agents or less has an incentive to collude and deviate from the protocol. We assume coalition members may agree on a deviation from the protocol in advance, but can communicate only over the network links during the protocol execution.
Definition 2.4 (k-resilient Equilibrium).
A protocol P is said to reach k-resilient equilibrium if, for any group C of agents s.t. |C| ≤ k, there is no protocol P' that agents in C may execute that would lead to a higher expected utility for each agent in C, assuming all agents not in C follow P.
The same intuitive example of a utility function above holds for a coalition, in which the coalition has a preference towards some decision value v.
2.2 Notations
The following notations are used throughout this paper:

ones - the number of agents in the network that receive 1 as input.

zeros - the number of agents in the network that receive 0 as input.

x_i - the input of agent i.

dec_i - the output value decided by agent i at the end of the algorithm.

c - the number of possible input and output values. For binary consensus: c = 2.
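Using these notations, the decision rule that Section 3 shows any (n-1)-resilient equilibrium for binary consensus must implement can be sketched as follows (the function name is illustrative):

```python
from functools import reduce
from operator import xor

def decide(inputs: list[int]) -> int:
    """Binary consensus decision: the XOR of all agents' inputs.

    Equivalently, the decision is 1 iff `ones`, the number of agents
    whose input is 1, is odd.
    """
    return reduce(xor, inputs, 0)
```

For example, `decide([1, 0, 1])` is 0, since ones = 2 is even.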
3 Necessary Conditions for (n-1)-resilient Consensus
Theorem 3.1.
The decision of any (n-1)-resilient equilibrium for binary consensus must be the XOR of all inputs, that is, dec = x_1 ⊕ x_2 ⊕ ⋯ ⊕ x_n.
Before we turn to the proof of Theorem 3.1, given in Sections 3.1 and 3.2, note that according to this theorem, if n is even and all inputs are 1, the decision must be 0, contradicting validity and leading to the following corollary:
Corollary 3.2.
There is no (n-1)-resilient equilibrium for binary consensus in even-sized networks.
3.1 Output is the XOR of the Inputs
Theorem 3.3.
If the distribution over the inputs is not uniform, there is no (n-1)-resilient equilibrium for consensus, i.e., an equilibrium requires Pr[x_i = v] = 1/c for every agent i and value v.
Theorem 3.4.
In any (n-1)-resilient equilibrium for consensus, given any inputs, the distribution over the possible decision values is uniform: Pr[dec = v] = 1/c for every value v.
Notice that while the proof of Theorem 3.1 holds only for binary consensus, Theorems 3.3 and 3.4 are correct for multivalued consensus as well.
Proof of Theorem 3.1.
We prove that the decision value of binary consensus must be the XOR of all inputs using induction on the number of agents in the network whose input value is 1.
In the base case, the input of all agents is 0, and by validity the decision must be 0, which is the XOR of all inputs.
For clarity of exposition we spell out the next case of the induction, in which the input of exactly one agent is 1 and the input of all other agents is 0. Assume by contradiction that the probability that 0 is decided in this case is greater than 0. Let I' be an input configuration for a coalition of n-1 agents in which all members of the coalition claim to receive 0 as input. By Theorem 3.3 (and since this is binary consensus), the input of the remaining agent is 0 with probability 1/2, in which case all inputs are 0 and by validity 0 is decided; with probability 1/2 its input is 1, in which case, by the contradiction assumption, 0 is decided with probability greater than 0. Thus Pr[dec = 0 | I'] > 1/2, contradicting Theorem 3.4. Hence, if exactly one agent receives 1, the decision value must be 1, proving the first induction step.
For the inductive step, assume that whenever at most m agents have input 1, the decision value of the consensus must be the XOR of all inputs. Let I' be an input configuration for a coalition of n-1 agents in which exactly m members of the coalition claim to receive 1, and the rest claim to receive 0.
From Theorem 3.4 (and since this is binary consensus) we get: Pr[dec = 0 | I'] = Pr[dec = 1 | I'] = 1/2.
If the input of the remaining agent is 0 (which from Theorem 3.3 happens with probability 1/2), then exactly m agents have input 1 and, by the induction hypothesis, the decision value of the consensus must be the XOR of all inputs. To satisfy the equality above, when the input of the remaining agent is 1, so that m+1 agents have input 1, the decision must be the complement value, which is again the XOR of all inputs.
Hence, in the case that m+1 agents have input 1, the decision value must be the XOR of all inputs. ∎
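The equilibrium intuition behind this proof can be checked mechanically: under the XOR rule, as long as the one honest agent's input is uniform, the decision is uniform no matter what the n-1 coalition members claim, so no claim changes the coalition's expected utility. A toy enumeration (illustrative; not part of the paper's proof):

```python
from functools import reduce
from operator import xor
from itertools import product

def decision_distribution(coalition_claims):
    """Distribution of the XOR decision over the honest agent's uniform input."""
    counts = {0: 0, 1: 0}
    for honest_input in (0, 1):
        dec = reduce(xor, coalition_claims, honest_input)
        counts[dec] += 1
    return {v: count / 2 for v, count in counts.items()}

# Whatever the coalition claims, the decision remains uniform.
for claims in product((0, 1), repeat=4):
    assert decision_distribution(list(claims)) == {0: 0.5, 1: 0.5}
```

This is exactly why, with uniform inputs, a coalition gains nothing by lying about its inputs under the XOR rule, while any rule that skews the decision distribution invites a profitable deviation.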
3.2 Proving Theorems 3.3 and 3.4
While the above proof holds only for binary consensus, the following lemmas and theorems are correct for multivalued consensus.
Lemma 3.5.
In any (n-1)-resilient equilibrium for consensus, for any value v, the probability to decide v is the same given any inputs of the coalition, i.e., for any two input configurations I, I' of n-1 of the agents: Pr[dec = v | I] = Pr[dec = v | I'].
Proof.
Assume by contradiction that Pr[dec = v | I] > Pr[dec = v | I']. A coalition with a preference to decide v, and that receives I' as input, has an incentive to deviate and act as if its input is I, contradicting equilibrium. ∎
Lemma 3.6.
In any (n-1)-resilient equilibrium for consensus, for any input value v, the probability to decide v is the same as the probability to receive v as an input: Pr[dec = v] = Pr[x_i = v].
Proof.
For any value v, if all inputs are v then by validity v is decided. For any agent i, let p_v = Pr[x_i = v]; then, when the other n-1 agents receive v as input, due to validity the probability that v is decided is at least p_v, i.e., Pr[dec = v] ≥ p_v. By Lemma 3.5 this is true given any inputs of the other agents. Since Σ_v p_v = 1 and Σ_v Pr[dec = v] = 1, then: Pr[dec = v] = p_v. ∎
Proof of Theorem 3.3.
Assume by contradiction that there exist values v and v' such that p_v > p_{v'}.
If all agents receive as input the same value v, then by validity v is decided. Given that the other n-1 agents receive v as input, the probability that v is decided is at least the probability that the input of agent i is v, i.e., Pr[dec = v] ≥ p_v.
If n-1 agents receive v as input and one agent receives v' as input, the decision must not be v, otherwise Pr[dec = v] > p_v, contradicting Lemma 3.6; thus, due to validity, the decision must be v' when n-1 agents receive v and one agent receives v'.
Now let the coalition act as if n-2 of its members receive v and one member receives v'. If agent i receives v as input then, as stated above, v' is decided; thus Pr[dec = v'] ≥ p_v > p_{v'}, contradicting Lemma 3.6.
Thus, the input distribution must be uniform, i.e., p_v = 1/c for every value v. ∎
3.2.1 (n-1)-resilient Binary Consensus for any Even n
A binary consensus protocol for any even n is presented in [4], combining a leader election algorithm with a XOR on selected inputs. In Appendix A we prove that this protocol reaches (n-1)-resilient equilibrium for binary consensus for any even n, when the input distribution is uniform. Note that the algorithm in [4] does not work in every network topology, but it does work in any network in which Resilient Input Sharing is possible (see [4] and Section 5).
4 No (n-1)-resilient Equilibrium for Multivalued Consensus
Here we discuss multivalued consensus, where the agreement is between c > 2 possible values rather than two. Applying the same logic as in the proof of Theorem 3.1 one can deduce:
Lemma 4.1.
In any (n-1)-resilient equilibrium for multivalued consensus: (1) if all agents receive the same input value a, then a is decided; (2) if exactly one agent receives a value b and all other agents receive a, then b is decided.
Proof.
The proof is the same as the first and second induction steps in the proof of Theorem 3.1. ∎
Theorem 4.2.
There is no (n-1)-resilient equilibrium for multivalued consensus for any c ≥ 3.
Proof.
Assume towards a contradiction that there is an (n-1)-resilient equilibrium for multivalued consensus for some c ≥ 3. Let v_1, v_2, v_3 be three distinct values. Denote by I_0 any configuration in which the input of one agent is v_1, of another is v_2, and of the rest is v_3. In a run of the protocol starting from I_0, due to validity the network's decision value must be either v_1 or v_2 or v_3. We prove that none of these values can be decided in an equilibrium, reaching a contradiction. Consider some agent i and a coalition C of the other n-1 agents. Define Ī_1 and Ī_2 as follows:

Ī_1 - a configuration of the coalition's inputs in which one agent in C receives v_1 and the rest receive v_3,

Ī_2 - a configuration of the coalition's inputs in which one agent in C receives v_2 and the rest receive v_3,
Assume towards a contradiction that v_1 is decided when starting from I_0. Notice that Ī_1 is exactly the coalition's part of I_0 when agent i is the agent whose input is v_2.
By point 2 of Lemma 4.1, if x_i = v_3 and the coalition acts as if its input vector is Ī_1, then the protocol must decide v_1. By Theorem 3.3, Pr[x_i = v_2] = Pr[x_i = v_3] = 1/c, therefore Pr[dec = v_1 | Ī_1] ≥ 2/c > 1/c, contradicting Lemma 3.6. Thus, in an equilibrium starting from configuration I_0, the decision value cannot be v_1.
Assume towards a contradiction that v_2 is decided when starting from I_0.
Notice that, from point 2 of Lemma 4.1, if x_i = v_3 and the coalition acts as if its input vector is Ī_2, then the protocol must decide v_2. As before we get Pr[dec = v_2 | Ī_2] ≥ 2/c > 1/c, contradicting Lemma 3.6. Thus, in an equilibrium starting from configuration I_0, the decision value cannot be v_2.
Applying the symmetric claim for v_3, with a coalition that acts as if its input vector contains the corresponding values, we get that in an equilibrium starting from configuration I_0, the decision value cannot be v_3.
Thus, no value from {v_1, v_2, v_3} can be decided in an (n-1)-resilient equilibrium for multivalued consensus starting with configuration I_0. Hence, due to validity, there is no (n-1)-resilient equilibrium for c-valued consensus for any c ≥ 3. ∎
5 Necessary and Sufficient conditions for Deterministic Consensus
The necessary conditions from Section 3 are extended here into necessary and sufficient conditions for a deterministic (n-1)-resilient equilibrium for binary consensus. Deterministic means that the step of each agent in each round of the algorithm is determined completely by its input and the history of messages it has received up until the current round. In Appendix C some difficulties in trying to extend our proof to nondeterministic algorithms are discussed. For the sufficient condition, a new problem, Resilient Input Sharing (RIS), a variant of knowledge sharing [4], is introduced.
Theorem 5.1.
A deterministic (n-1)-resilient equilibrium for binary consensus exists iff:

The network size n is odd.

The input distribution is uniform

There exists an algorithm for deterministic RIS (defined below).
5.1 The Resilient Input Sharing Problem
In the RIS problem, the n agents in the network share their binary inputs, while each agent i assumes the other n-1 agents are in a coalition. Intuitively, each agent requires all other agents to commit to their inputs before, or simultaneously with, their learning of its own input. The motivation for this requirement is that we consider problems in which (1) all agents compute the same function on the inputs, and (2) if any one input is unknown, then any output in the range of the function is still equally possible [4, 5]. Therefore, the above requirement ensures that the coalition cannot affect the computation after learning the remaining (honest) agent's input, which is necessary for the computation to reach (n-1)-resilient equilibrium. We use the following definitions:

K_j(r) - Agent j's knowledge at the beginning of round r, including any information the coalition could have shared with it.

Agent j is an i-knower(r) if at the beginning of round r it can make a 'good' guess about x_i, i.e., based on K_j(r) it can guess x_i with probability greater than 1/2.

N_i(r) - the group of all i-knowers at the beginning of round r. In a RIS algorithm, N_i initially contains only agent i, and at termination it contains all agents.
Consider for example the network in Figure 1. At round 0, agent i sends two different messages, whose XOR is its input, to two of its neighbors. At round 1, these neighbors can pass the messages to a third agent j, even if this would not happen in a correct run. Thus, while neither neighbor by itself is an i-knower, agent j is an i-knower(2).
5.1.1 The RIS Problem
A solution to the RIS problem satisfies the following conditions:

Termination - the algorithm must eventually terminate.

Input-sharing - at termination, each agent knows the inputs of all other agents.

Resilience - at any round r, agent i does not receive new information from agents in N_i(r).
Notice: in a consensus protocol, if j is an i-knower(r) and j can still influence the output at round r, then the protocol is not an (n-1)-resilient equilibrium. Thus, in an (n-1)-resilient equilibrium for consensus, no new information can be sent to agent i from any i-knower(r) at round r.
5.2 The effect of messages in a XOR computation
We prove that at the end of a distributed XOR computing algorithm, if an agent is given all the chains of messages that have affected its run, it can infer the input of every other agent (Theorem 5.5). This result applies to both deterministic and nondeterministic XOR algorithms.
Remark 1: In synchronous networks, an agent can pass information to its neighbor through a silent round. Hereafter, every protocol in which informative silent rounds (explained in the proof of Lemma 5.6 and defined formally in Appendix B) occur is altered, and a special message EMPTY is sent instead on the corresponding link.
Remark 2: Hereafter, we consider networks in which every agent knows the topology of the network before the algorithm starts. Otherwise, the coalition could always cheat and choose a topology in which RIS is not possible (for example, a 1-connected topology).
Definition 5.2 (Message recipients).
Let R be a run of the protocol and A a group of agents. Recv(A, R, r) denotes the group of agents that receive a message from an agent in A at round r of R.
Definition 5.3 (Agents affected by a message).
In a run R, let m be a message sent at round r to agent j from agent j'. Then:

Affected_0(m) = {j} - agent j is directly affected by m.

Affected_k(m) = Recv(Affected_{k-1}(m), R, r+k) - agents that were recursively affected by m.
This definition illustrates that a message may affect more than just its recipient; its potential effect propagates through the network, reaching different agents through other messages.
Definition 5.4 (All the (chains of) messages that have an effect on agent j in run R).

Effect(j, R) = {m | m sent in R, and a chain of agents affected by m terminates at j}.
Theorem 5.5 (The encoding of all inputs).
Let R be a run of a distributed XOR computing algorithm. For any agents i ≠ j, agent j can compute x_i from the following information:

x_j - its input.

dec - the decision value, i.e., the XOR of all inputs.

Effect(j, R) - all the messages in R that have an effect on agent j.
To prove Theorem 5.5, assume the following base case is correct (to be proved in the sequel):
Lemma 5.6.
Theorem 5.5 is correct for a network of size n = 3.
Proof of Theorem 5.5.
Let G be a network with more than three agents, and let i, j be two agents in G. Create a new network G' in which agents i and j are as in G, but all other agents of G are clustered into one 'virtual' agent v. A distributed XOR algorithm for G' is:

Agent v chooses bits, one for each agent of G it emulates, such that the XOR of these bits is its input x_v.

Agents i and j behave in G' as if they were in G, explicitly attaching to each message the id of its destination, while v emulates the behavior of the other agents of G, attaching to each message the id of its source.
Let x_v and dec be the input and output of v in a run. For any run R of the algorithm in G, there is a run R' of the algorithm in G' s.t.: (1) the inputs of i and j are the same in R and R', (2) x_v equals the XOR of the inputs of the clustered agents in R, and (3) Effect(j, R') corresponds to Effect(j, R).
From Lemma 5.6 we know that x_i can be computed from x_j, dec and Effect(j, R'). Therefore:
in G as well, x_j, dec and Effect(j, R) are enough to compute x_i.
∎
Proof of Lemma 5.6.
Assume towards a contradiction that there exist R_1, R_2, two runs of the algorithm on a network of three agents i, j, v, such that:

x_j(R_1) = x_j(R_2) - agent j's inputs in R_1 and R_2 are the same.

dec(R_1) = dec(R_2) - the decision value is the same in both R_1 and R_2.

Effect(j, R_1) = Effect(j, R_2) - exactly the same set of messages affect j in both runs.

x_i(R_1) ≠ x_i(R_2) - agent i's input in R_1 is different than in R_2.
Clearly from 1, 2, and 4 it must be that x_v(R_1) ≠ x_v(R_2), since the decision is the XOR of the three inputs.
Towards a contradiction we construct a run R_3, in which j's and v's inputs are the same as in R_1 and i's input is the same as in R_2, so the XOR of the inputs in R_3 differs from the decision value of R_1.
In R_3, agents j and v start to perform their steps according to R_1, until the first round in which j or v receives a message that it does not receive in that round in R_1. Agent i behaves the same as in R_2, until the first round, denoted round r_i, in which it receives a message it does not receive in that round in R_2. Notice that it is legal for all agents to act this way in round 0. Further, if j and v can continue according to R_1 and i can continue according to R_2 until termination, then j outputs the same value as it would in R_1, which is incorrect for R_3.
Observation 1:
From round r_i until termination, i cannot send messages to j in either R_2 or R_3, as otherwise i's effect would propagate to j, causing Effect(j, R_1) ≠ Effect(j, R_2), contradicting point 3 of the assumptions.
Observation 2:
Similarly, from round r_i until termination, i cannot send messages to v in R_3, as otherwise: let r' be the first round after r_i of R_3 in which i sends a message to v. In R_2, i does not send a message to v in round r' (see Observation 1). This means that this silent round of i between R_2 and R_3 is informative (it tells v whether the run is R_2 or R_3). Since we do not allow informative silent rounds (see Remark 1), we reach a contradiction.
Notice that by point 3 in the assumptions, after r_i, agent i cannot even communicate with j through v, since i's effect would propagate to j through v. From the two observations above, from round r_i of R_3, i cannot communicate with j or v, and from v's perspective, v is running R_1. The same logic applies for i: the first round in which it is illegal for i to act according to R_2 is a round after which i cannot send messages to j or v (not even indirectly). Thus, j's experience throughout R_3 is the same as in R_1, resulting in j making an incorrect output. Contradiction. ∎
5.3 Deterministic (n-1)-resilient Consensus implies RIS, Completing the Proof
In a deterministic synchronous binary consensus protocol, in which all agents start at the same round, for each input vector the run of the algorithm is fully determined.
Let us look at a network running some deterministic binary consensus, with agent i and coalition C of the other n-1 agents. Intuitively, agents in the coalition can choose in advance the input vector they will use in the algorithm. Thus, from the coalition's perspective, there can be only two possible runs: R_0, in which x_i = 0, and R_1, in which x_i = 1. For each agent in the coalition, there is a first round in which R_0 and R_1 differ; at that point this agent knows x_i. Thus, each agent in the coalition is in one of two states: it knows nothing about x_i, or it knows x_i exactly. This is in contrast to nondeterministic algorithms, see for example Figure 1.
Below we transform any deterministic (n-1)-resilient equilibrium for binary consensus into a deterministic RIS algorithm. In Appendix C the difficulties in the nondeterministic case are explained.
Theorem 5.7.
If there exists a deterministic (n-1)-resilient equilibrium for binary consensus on a network G, then there exists an algorithm for RIS on G.
Proof.
In the new algorithm, each agent j runs the consensus protocol with the following modifications:

For each message m that j receives, j appends m to a local buffer of messages that have affected it.

Agent j appends its buffer to each message it sends.

Agent j adds to its buffer all the information piggybacked on incoming messages.
In this new algorithm, every message propagates in the network, reaching all the agents it affects. By the end of the algorithm, the buffer maintained by agent j contains Effect(j, R), where R is the run of the original consensus protocol. By Theorem 3.1, the consensus protocol is a XOR computing protocol, and by Theorem 5.5, j's buffer contains enough information to infer all inputs. Thus the new algorithm is an RIS protocol.
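The three modifications can be sketched as follows (message and agent representations are simplified placeholders; in the real transformation entire messages, including sender and round information, are piggybacked):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Message:
    payload: str
    piggyback: frozenset = frozenset()  # buffer of the sender at send time

@dataclass
class RISAgent:
    # Local buffer of (payloads of) messages that have affected this agent.
    buffer: set = field(default_factory=set)

    def on_receive(self, msg: Message) -> None:
        # Record the message itself plus everything piggybacked on it.
        self.buffer.add(msg.payload)
        self.buffer |= msg.piggyback

    def wrap_outgoing(self, payload: str) -> Message:
        # Attach the whole buffer to every outgoing consensus message.
        return Message(payload, frozenset(self.buffer))
```

In this way a message's effect travels with every later message it influences, so at termination an agent's buffer approximates the set of all message chains that affected its run.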
It remains to prove that the new algorithm is resilient. An input sharing protocol is resilient (Subsection 5.1) if at any round r, agent i does not receive new information from agents in N_i(r). As stated before, this requirement applies to any (n-1)-resilient equilibrium for binary consensus as well. Thus, to show that the new algorithm is resilient, it is enough to show that for every agent i:

In each round of the new algorithm, i receives messages from the same neighbors it receives from in the original consensus protocol.

In each round r of the new algorithm, every agent that is an i-knower(r) in the new algorithm is an i-knower(r) in the original consensus protocol as well.
The first point is immediate from the construction of the new algorithm. For the second point, observe some agent j at round r of the new algorithm, which is not an i-knower in the original consensus protocol. For j to become an i-knower(r) in the new algorithm, the coalition must send j enough information by round r for it to make a 'good' guess about x_i. There are two kinds of paths by which the coalition can send information to j: paths that do not pass through i, and paths that do.
Through paths not including i, the coalition can pass information at the same pace in both algorithms. Since j is not an i-knower in the original protocol, using these paths alone is not enough to make j an i-knower(r) in the new algorithm. Regarding paths that include i: as argued in the beginning of this subsection, in a deterministic (n-1)-resilient equilibrium for binary consensus, if a member of the coalition has any information about x_i, then that member knows x_i. Therefore, in an equilibrium, i should not receive messages from such coalition members. Thus, if the coalition has information it wants to pass to j, it cannot do so using paths that include agent i, since i does not accept and propagate messages from i-knowers. To conclude, if j is an i-knower in the new algorithm, then j is an i-knower in the original protocol as well. Since the original protocol is an (n-1)-resilient equilibrium for consensus, the new algorithm is resilient. ∎
5.3.1 Completing the Proof: Necessary and Sufficient Conditions for Deterministic Consensus
Proof of Theorem 5.1.
For the sufficient direction, assume that the conditions are realized, and let us suggest a simple (n-1)-resilient equilibrium for binary consensus: run the RIS algorithm and output the XOR of all inputs. Since the RIS algorithm is resilient, no coalition has an incentive to cheat. The necessary direction follows from Corollary 3.2, Theorem 3.3, and Theorem 5.7. ∎
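The sufficient direction is essentially a two-line protocol. A sketch (here `run_ris` stands for any RIS algorithm for the given network, assumed to return the list of all agents' inputs, as guaranteed by the input-sharing property):

```python
from functools import reduce
from operator import xor

def consensus_via_ris(my_input: int, run_ris) -> int:
    """Binary consensus from RIS: share all inputs, then output their XOR.

    `run_ris(my_input)` is assumed to return the list of all n agents'
    inputs, including `my_input` itself.
    """
    all_inputs = run_ris(my_input)
    return reduce(xor, all_inputs, 0)
```

Since every agent learns the same input vector and applies the same XOR, agreement, validity (for odd n), and termination follow from the RIS properties.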
6 Discussion
Surprisingly, while there is an equilibrium for binary consensus resilient to coalitions of n-1 agents, no such equilibrium exists for multivalued consensus. This is the first model we know of in which there is a separation between binary and multivalued consensus. Intuitively, this is because a coalition with a preference towards a value v has an incentive to cheat and act as if the input of all agents in the coalition is v, thus lowering the number of possible decision values (due to validity) to two values at most. Consider, for example, the standard bit-by-bit reduction from binary to multivalued consensus: the probability to decide v rises above 1/c, since the decision value is determined by the decision on the first bit of the coalition input that differs from the input of the honest agent. We conjecture that this intuition holds even for smaller coalitions, down to a single cheater. The results in Sections 3 and 4 hold regardless of the network topology, scheduling models, or cryptographic solutions, as they are based solely on the input values and utilities of the agents.
Furthermore, we present necessary and sufficient conditions for (n-1)-resilient equilibrium for deterministic binary consensus using the Resilient Input Sharing (RIS) problem. This in fact means that an agent cannot hide its input from the rest of the network in any (n-1)-resilient equilibrium protocol that computes XOR, i.e., even though we only compute the XOR of the inputs, at the end of the protocol all agents can deduce the input values of all other agents.
There are several open directions for research:

Extending the equivalence result to nondeterministic consensus and RIS.

Can binary consensus be solved without the conditions of odd network size and uniform input distribution, for coalitions of smaller size, such as n-2 or 1?

Does an equilibrium for multivalued consensus exist for coalitions of size n-2 or less?
References
 [1] Ittai Abraham, Lorenzo Alvisi, and Joseph Y. Halpern. Distributed computing meets game theory: combining insights from two fields. SIGACT News, 42(2):69–76, 2011.
 [2] Ittai Abraham, Danny Dolev, Rica Gonen, and Joseph Y. Halpern. Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In PODC, pages 53–62, 2006.
 [3] Ittai Abraham, Danny Dolev, and Joseph Y. Halpern. Distributed protocols for leader election: A game-theoretic perspective. In DISC, pages 61–75, 2013.
 [4] Yehuda Afek, Yehonatan Ginzberg, Shir Landau Feibish, and Moshe Sulamy. Distributed computing building blocks for rational agents. In Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, PODC ’14.
 [5] Yehuda Afek, Shaked Rafaeli, and Moshe Sulamy. The Role of A Priori Information in Networks of Rational Agents. In Ulrich Schmid and Josef Widder, editors, 32nd International Symposium on Distributed Computing (DISC 2018), volume 121 of Leibniz International Proceedings in Informatics (LIPIcs), pages 5:1–5:18, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL: http://drops.dagstuhl.de/opus/volltexte/2018/9794, doi:10.4230/LIPIcs.DISC.2018.5.
 [6] Amitanand S. Aiyer, Lorenzo Alvisi, Allen Clement, Michael Dahlin, Jean-Philippe Martin, and Carl Porth. BAR fault tolerance for cooperative services. In SOSP, pages 45–58, 2005.
 [7] Dor Bank, Moshe Sulamy, and Eyal Waserman. Reaching distributed equilibrium with limited ID space. In Structural Information and Communication Complexity - 25th International Colloquium, SIROCCO 2018, Ma'ale HaHamisha, Israel, June 18-21, 2018, Revised Selected Papers, pages 48–51, 2018. doi:10.1007/978-3-030-01325-7_9.
 [8] Varsha Dani, Mahnush Movahedi, Yamel Rodriguez, and Jared Saia. Scalable rational secret sharing. In PODC, pages 187–196, 2011.
 [9] John R. Douceur. The sybil attack. In Revised Papers from the First International Workshop on Peer-to-Peer Systems, IPTPS '01, pages 251–260, London, UK, UK, 2002. Springer-Verlag.
 [10] Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. Impossibility of distributed consensus with one faulty process. J. ACM, 32(2):374–382, April 1985. URL: http://doi.acm.org/10.1145/3149.214121, doi:10.1145/3149.214121.
 [11] Georg Fuchsbauer, Jonathan Katz, and David Naccache. Efficient rational secret sharing in standard communication networks. In TCC, pages 419–436, 2010.
 [12] Adam Groce, Jonathan Katz, Aishwarya Thiruvengadam, and Vassilis Zikas. Byzantine agreement with a rational adversary. In ICALP (2), pages 561–572, 2012.
 [13] Joseph Y. Halpern and Vanessa Teague. Rational secret sharing and multiparty computation: extended abstract. In STOC, pages 623–632, 2004.
 [14] Joseph Y. Halpern and Xavier Vilaça. Rational consensus: Extended abstract. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC ’16, pages 137–146, New York, NY, USA, 2016. ACM. URL: http://doi.acm.org/10.1145/2933057.2933088, doi:10.1145/2933057.2933088.
 [15] Anna Lysyanskaya and Nikos Triandopoulos. Rationality and adversarial behavior in multiparty computation. In CRYPTO, pages 180–197, 2006.
 [16] Adi Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
 [17] Assaf Yifrach and Yishay Mansour. Fair leader election for rational agents in asynchronous rings and networks. In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, PODC ’18, pages 217–226, New York, NY, USA, 2018. ACM. URL: http://doi.acm.org/10.1145/3212734.3212767, doi:10.1145/3212734.3212767.
Appendix A (n-2)-resilient Consensus for Even n
In [4] the authors provide different protocols for even and odd size networks. Here we prove that the protocol suggested for binary consensus when n is even provides an (n-2)-resilient equilibrium for binary consensus. The protocol assumes the existence of an (n-2)-resilient equilibrium for knowledge sharing in order to perform an (n-2)-resilient equilibrium for leader election (notice that in [17], it is shown that in an asynchronous ring, the leader election algorithm of [4] is not resilient to coalitions). Further, the protocol assumes each agent has a unique id, and all agents start the protocol at the same round.
Essentially, the protocol suggested in [4] when n is even performs input sharing in parallel to leader election, and then outputs the XOR of all inputs except the leader's input. It is easy to see that this protocol for consensus satisfies agreement, validity, and termination (in particular, validity holds because when all n inputs equal some value v and n is even, the XOR of the remaining n-1 copies of v is v).
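The decision rule just described can be sketched in a few lines; this is a minimal illustration assuming the shared inputs have been collected into a list and a leader index has already been agreed on (the function name and representation are ours, not taken from [4]):

```python
from functools import reduce

def consensus_decision(inputs, leader):
    # XOR of all shared input bits, omitting the elected leader's input.
    return reduce(lambda a, b: a ^ b,
                  (x for i, x in enumerate(inputs) if i != leader), 0)

# Example: 4 agents with inputs 1,0,1,1; agent 2 elected leader.
# The decision is 1 ^ 0 ^ 1 = 0.
print(consensus_decision([1, 0, 1, 1], leader=2))  # -> 0
```

Note how validity falls out of the parity of n: with n even and all inputs equal to v, the XOR ranges over an odd number (n-1) of copies of v, so the result is v itself.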
For the rest of this section, consider a network of even size n executing the protocol in Algorithm 1, with a coalition C of size n-2, and let A and B denote the two agents outside the coalition. Also, we assume the input distribution is uniform, i.e., each agent's input is 0 or 1 with probability 1/2 each.
Theorem A.1.
Algorithm 1 is an (n-2)-resilient equilibrium for binary consensus.
The proof of Theorem A.1 follows from the observation and lemmas below.
Observation. If no agent deviates from the protocol in Algorithm 1, then the decision value of the consensus is uniformly distributed. Therefore, if after the coalition C deviates the probability to decide a value preferred by C is still 1/2, then C has no incentive to cheat.
Lemma A.2.
If at the end of the knowledge sharing step, A learns the true value of B's input (or vice versa), C has no incentive to deviate from the protocol.
Proof.
In this case at least one of the inputs of agents A and B is not omitted from the XOR performed by A and/or B. Since the coalition has no influence on these inputs, which are uniformly distributed, the result of the XOR is also uniformly distributed, and the coalition has no incentive to cheat. ∎
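The core of this argument — XOR-ing in a single uniformly random bit that the coalition does not control makes the result uniform, regardless of what the coalition does — can be verified exhaustively. A small illustrative check (our own, not part of the original proof):

```python
from itertools import product

# For every fixed assignment of coalition-controlled bits, XOR-ing in
# one uniform, independent honest bit yields 0 and 1 equally often.
for controlled in product([0, 1], repeat=3):
    fixed = controlled[0] ^ controlled[1] ^ controlled[2]
    outcomes = [x ^ fixed for x in (0, 1)]  # honest bit x is uniform
    assert sorted(outcomes) == [0, 1]       # the result is uniform
print("XOR with one uniform independent bit is uniform")
```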
Following the observation above and Lemma A.2, it remains to consider the case in which the coalition can cheat each of A and B about the input, id, and/or random value (selected during leader election) of the other.
Lemma A.3.
C has no incentive to share with A and B two sets of ids and random values that disagree.
Proof.
Assume w.l.o.g. that the coalition prefers the decision value 1. Denote by E the event in which the coalition forces A and B to elect two different leaders. Notice that to achieve this, the coalition must provide A and B with two different sets of ids and random values for all the other agents.
In case E, the decision value of A is independent of the decision value of B. Following [4], each agent elects itself as leader with probability 1/n. If A does not elect itself as leader, then its own input is not omitted from its XOR, so (based on the uniform input distribution) A decides 1 with probability 1/2. Hence:
Pr[A decides 1] <= 1/n + (1 - 1/n) * 1/2 = 1/2 + 1/(2n)
The same goes for agent B. Since the decision of A is independent of the decision of B, and by solution preference the coalition succeeds only if both A and B decide 1:
Pr[A and B both decide 1] <= (1/2 + 1/(2n))^2 < 1/2 for every n >= 4.
Since the probability to decide 1 when executing the protocol in Algorithm 1 with no deviation is exactly 1/2, there is no incentive for C to share different ids or random values with A than it shares with B (and vice versa). ∎
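As a numeric sanity check (our own, under the assumptions used in this section: an agent elects itself leader with probability 1/n and otherwise decides the coalition's preferred value with probability 1/2), the probability that two independently cheated agents both decide the preferred value stays strictly below the honest baseline of 1/2:

```python
# Coalition success bound when A and B elect different leaders and
# therefore decide independently (assumptions stated in the lead-in).
def single_agent_bound(n):
    return 1 / n + (1 - 1 / n) / 2  # leader w.p. 1/n, else a fair coin

for n in range(4, 201, 2):  # even network sizes
    assert single_agent_bound(n) ** 2 < 0.5
print(single_agent_bound(4) ** 2)  # -> 0.390625
```

So even in the smallest even network (n = 4), splitting the leader election gives the coalition at most a 0.390625 success probability, worse than following the protocol.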
Lemma A.4.
C has no incentive to share a set of input values with A and a set with B that disagree.
Proof.
Assume w.l.o.g. that the coalition prefers the decision value 1, and denote by E' the case in which the coalition provides A with one set of input values and B with a different set. As in the previous proof, the decision values of A and B are independent. By Lemma A.3, both agents A and B elect the same leader, hence at least one of them is not elected; w.l.o.g., B is not the leader. When A calculates the XOR, the input of B is not omitted from the calculation. Since the set of inputs provided to A is independent of B's input (guaranteed by the knowledge sharing being resilient), and since C does not know B's input in advance, which is uniformly distributed, the result of the XOR is uniformly distributed, i.e., Pr[A decides 1] = 1/2. Since the probability to reach consensus on 1 when running Algorithm 1 with no deviation is 1/2, there is no incentive for C to share different input values with A than it shares with B. ∎
Proof of Theorem A.1.
From Lemmas A.2, A.3 and A.4, we know that in any run of the algorithm, both A and B obtain the same knowledge. Since the decision value is uniformly distributed in a correct run, for any legal knowledge sharing the probability of deciding the coalition's preferred value remains 1/2. This means that C has no incentive to choose in advance a specific set of random values, input values, or ids. ∎
Appendix B Informative Silent Rounds and Informative Messages
For this section, let r be a run of a distributed XOR algorithm in a network G.
Definition B.1 (Link experiences).
For any agent p, any round t, and any neighbor q of p, define the incoming link experience of p with q at round t to be the message p receives from q at round t, or silence if no message is received. Similarly, define the outgoing link experience of p with q at round t to be the message p sends to q at round t, or silence if no message is sent.
Definition B.2 (Round of an agent).
For an agent p at round t, the round of p at t consists of:

- Agent p's input.

- All incoming link experiences p has with its neighbors at round t.

- All outgoing link experiences p has with its neighbors at round t.

- The decision value of p, which is unset as long as t is not the final round.

Together, these components form round t from agent p's perspective.
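Definition B.2 can be pictured as a small data structure; the following is an illustrative sketch (the names and types are ours, not the paper's notation), in which silence is represented by None:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

SILENCE = None  # a link experience in which no message crosses the link

@dataclass
class AgentRound:
    # One round from an agent's perspective, per Definition B.2.
    input_bit: int  # the agent's input
    incoming: Dict[str, Optional[str]] = field(default_factory=dict)
    outgoing: Dict[str, Optional[str]] = field(default_factory=dict)
    decision: Optional[int] = None  # unset until the final round

# Agent p hears "m1" from neighbor q but stays silent toward q.
r = AgentRound(input_bit=1, incoming={"q": "m1"}, outgoing={"q": SILENCE})
print(r.decision)  # -> None
```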
Definition B.3 (Run of an agent).
For an agent p, define the run of p to be the projection of the run r on p, i.e., p's input followed by the sequence of p's rounds in r.
Definition B.4 (Prefix and suffix of a run).
For a round t, the prefix of a run r up to round t consists of the first t rounds of r. Each prefix of a run has a set of possible legal suffixes: sequences of rounds that extend the prefix into a legal run of the algorithm.
Definition B.5 (Informative link experience).
Intuitively, informative link experiences are those after which the receiving agent's execution may be altered. Let m be a legal incoming link experience that agent p has with neighbor q at round t of run r. m is informative if there exist:

- m' := another legal incoming link experience that p has with q at round t, with m' different from m.

- O := a set of outgoing link experiences p has with its neighbors at round t.

- I := a set of incoming link experiences p has with its neighbors at round t, not including the experience with q.

- d := a decision value.

Such that the following holds:

- Both the round formed from (I, m, O, d) and the round formed from (I, m', O, d) are legal rounds for agent p in a run with the same prefix as r up to round t.

- There exists a suffix of p's run that is legal after one of these rounds but not after the other, i.e., p's subsequent execution may differ depending on whether it experienced m or m'.
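Definition B.5 can be paraphrased operationally: an experience is informative exactly when some alternative legal experience at the same round leads to a different set of possible suffixes for the agent. A toy sketch of this test (the helper names and the suffix function are hypothetical, for illustration only):

```python
def is_informative(m, alternatives, suffixes_after):
    # m is informative if some other legal experience m2 at the same
    # round yields a different set of legal suffixes for the agent.
    return any(suffixes_after(m2) != suffixes_after(m)
               for m2 in alternatives if m2 != m)

# Toy example: receiving silence (None) rules out the "forward" suffix,
# so silence is informative, while "m1" has no distinct alternative.
suffixes = lambda e: {"halt"} if e is None else {"forward", "halt"}
print(is_informative(None, ["m1"], suffixes))  # -> True
print(is_informative("m1", ["m1"], suffixes))  # -> False
```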
Definition B.6 (Informative silent round).
In Subsection 5.2, an informative silent round is an incoming link experience m that an agent p has with a neighbor q at round t, such that:

- m is silence.

- m is informative.
Appendix C Difficulties in Extending Theorem 5.7 to the Non-Deterministic Case
Figure 2 depicts a counter-example, in a non-deterministic algorithm, to the construction in Theorem 5.7. The two agents cannot make a good guess regarding the XOR on their own; if, however, they were able to combine the information they have acquired, they would become knowers. In the original algorithm, each of them can still receive (send) messages from (to) neighbors that are not knowers. Applying the construction in Theorem 5.7 to this non-deterministic algorithm, one agent would have been able to pass its array of messages, and the other would have to let it pass through, thus creating a 'shortcut' between them.