In Reinforcement Learning, learning an optimal policy often demands greater sample efficiency, whether due to the complexity of the problem or the difficulty of obtaining data. One family of approaches to this problem is Assisted Reinforcement Learning, in which external information is transferred to the agent, for example in the form of advice offered by a domain expert. However, these approaches often break down when advice comes from multiple experts, who may contradict one another. More generally, experts (especially humans) can give incorrect advice. Our work investigates how an RL agent can benefit from the good advice it receives while remaining robust to bad advice, and how it can exploit consensus and contradiction among a panel of experts to maximise the information gained.