Canvas Foreword

The following content is a discursive “canvassing” exercise intended to: process ideas and prime them for more formal publication; foreground thought processes in the spirit of auto-discourse (see A Primer on Auto-Discourse); garner feedback from peers; and establish conceptual provenance for ideonomic archiving purposes.

One of the more promising yet, to my knowledge, unexplored applications of AI in governance is that of delegate agents, or governance personae trained on the preferences of the delegator. In the spirit of theoretical governance[1], it might be helpful to explore some of the implications surrounding the usage of personal AI for the purposes of governance, specifically onchain governance[2].

In certain respects, one can view the prospects rather optimistically. Issues like voter fatigue, delegate corruption or inactivity, and governance groupthink can conceivably be addressed via some elegant arrangement of delegate agents. For the purposes of this inquiry, I will make several assumptions about things I do not deeply understand, regarding the technical feasibility of such theoretical governance arrangements.

Let us, then, assume an arrangement whereby participants in a given voting body, say an onchain organization, each have at their disposal a personal AI over which they have complete control, with the associated compute costs paid by the participant in exchange for the conveniences enumerated below. Governance participants could then articulate their beliefs and preferences in some form of ontologically suitable documentation, and their personal delegate agent could be trained on this documentation. The agent would then automatically read each new proposal in its entirety and draw on the training documentation to predict the user's vote.
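
To make this concrete, here is a minimal sketch of the interface such an agent might expose, in Python. Every name here is hypothetical, and the keyword-overlap scoring is a crude placeholder for whatever model would actually be trained on the delegator's documentation.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    proposal_id: str
    text: str
    options: list[str]  # e.g. ["for", "against", "abstain"]

class DelegateAgent:
    """Hypothetical personal agent trained on a delegator's preferences."""

    def __init__(self, preference_docs: list[str]):
        # "Training" here is just collecting the delegator's preference
        # vocabulary; a real agent would fine-tune or index an actual model.
        self.preferences = {w.lower() for doc in preference_docs for w in doc.split()}

    def predict_vote(self, proposal: Proposal) -> str:
        # Placeholder logic: lean toward the first option when the proposal
        # overlaps the delegator's stated preferences, else the last option.
        overlap = set(proposal.text.lower().split()) & self.preferences
        return proposal.options[0] if overlap else proposal.options[-1]

agent = DelegateAgent(["favor treasury diversification and low-risk spending"])
print(agent.predict_vote(Proposal("p-1", "Diversify treasury into stables", ["for", "against"])))
# -> "for" (the word "treasury" overlaps the stated preferences)
```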

But what if the delegate agent predicts incorrectly? Here let us also assume a governance arrangement whereby members have a week to vote on a proposal. The delegate agent would, practically instantly, predict the user's decision based on the proposal's voting options and the user's training documentation, and queue up a voting decision for the user to confirm or deny. This would give the user the opportunity to participate directly, should the predicted decision be deemed unsatisfactory. If the voting period elapses without manual intervention, the queued vote is locked in. In other words, the agent's prediction is optimistic, in the sense of optimistic governance, and is on track to be confirmed unless vetoed by the delegator.
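
A sketch of that optimistic confirmation flow, assuming the one-week window above; the class and its in-memory deadline handling are illustrative only.

```python
import time

VOTING_PERIOD = 7 * 24 * 60 * 60  # the assumed one-week window, in seconds

class QueuedVote:
    """A predicted vote that stands unless the delegator vetoes it in time."""

    def __init__(self, proposal_id: str, predicted_choice: str):
        self.proposal_id = proposal_id
        self.choice = predicted_choice
        self.deadline = time.time() + VOTING_PERIOD

    def override(self, choice: str) -> bool:
        # Direct participation: the delegator replaces the predicted choice,
        # but only while the voting period is still open.
        if time.time() < self.deadline:
            self.choice = choice
            return True
        return False

    def finalize(self) -> str | None:
        # Once the period elapses, whatever is queued is locked in;
        # the prediction is confirmed unless it was overridden above.
        return self.choice if time.time() >= self.deadline else None
```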

Delegate agents could also conceivably retain some appropriate measure of memory pertaining to the history of previous proposals, amendments, operating agreements, and other documents approved by the user. Here, again, I am not knowledgeable enough to elaborate on the exact technical details of this training, but for the sake of theoretical governance I assume it is either possible already, or will be soon.
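
If such memory is feasible, it might be as simple as an append-only record of everything the delegator has approved; the substring matching below is a naive stand-in for real retrieval.

```python
class AgentMemory:
    """Hypothetical record of documents the delegator has approved."""

    def __init__(self):
        self.records: list[tuple[str, str]] = []  # (kind, text)

    def remember(self, kind: str, text: str) -> None:
        # kind might be "proposal", "amendment", "operating-agreement", etc.
        self.records.append((kind, text))

    def recall(self, query: str) -> list[str]:
        # Placeholder retrieval; a real agent would use semantic search.
        return [text for _, text in self.records if query.lower() in text.lower()]
```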

As for how the onchain governance could be facilitated, we would need onchain agents capable of participating in voting. The voting itself could be either onchain or offchain, but the agent would need to constitute a viable delegate according to the various existing standards for onchain delegation. Regardless of implementation specifics, the user should be able to undelegate at their discretion, i.e. the delegation arrangement should be as liquid as possible, in light of possible failure scenarios involving compromised or maladjusted agents. In a sense, such an arrangement would arguably render viable the prospect of an absolutely liquid representative democracy.
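
Purely as an illustration of that liquidity requirement, here is how a delegation registry with at-will undelegation might look; an actual implementation would follow existing onchain delegation standards rather than this in-memory stand-in.

```python
class DelegationRegistry:
    """In-memory stand-in for an onchain delegation registry."""

    def __init__(self):
        self.delegations: dict[str, str] = {}  # delegator address -> agent address

    def delegate(self, delegator: str, agent: str) -> None:
        self.delegations[delegator] = agent

    def undelegate(self, delegator: str) -> None:
        # No lockup and no notice period: revocable at will, which bounds
        # the damage a compromised or maladjusted agent can do.
        self.delegations.pop(delegator, None)
```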

In the event that it proves too expensive or inefficient for every user to have their own delegate agent, users could collectively articulate their governance preferences in some joint documentation and pool their delegation to a single agent trained on said documentation, as sketched below. That said, and perhaps this is more of a personal preference on my part, I stress the importance of sovereignty in these considerations: in terms of the technical capabilities of such an arrangement, the user should have complete liberty to train and retrain their delegate agent, undelegate at their own discretion, and manually override or veto any queued voting decision.
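
Reusing the hypothetical classes sketched earlier, the pooled variant might look like this:

```python
def pool_delegation(registry: DelegationRegistry, delegators: list[str],
                    joint_docs: list[str], agent_address: str) -> DelegateAgent:
    # One shared agent is trained on the jointly authored documentation,
    # and every participating delegator points their delegation at it.
    # Each delegator retains the right to undelegate at any time.
    shared_agent = DelegateAgent(joint_docs)
    for delegator in delegators:
        registry.delegate(delegator, agent_address)
    return shared_agent
```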

Many of the possible failure scenarios which come to mind would be mitigated by the user retaining ultimate control over their delegate agent. That said, as markets seem to reliably prefer convenience over sovereignty and privacy, it would not surprise me to see a productized fleet of custodial delegate agents rolled out, a development which may still conceivably solve the issue of voter fatigue, at the cost of some vulnerability to monopolized governance power. Even so, it does not strike me as implausible to expect a self-custodial option, even a productized one where users pay a subscription for access to compute. Here, again, I am not familiar enough with the relevant technical and industrial details to elaborate.

At any rate, the risks of such an arrangement seem substantially mitigated by a sufficient degree of custodial control exercised by the delegator. That is, so long as the delegator has the option to deactivate, retrain, veto, or undelegate from their delegate agent (the control surface sketched below), the edge cases around rogue agents seem much more manageable. This level of control would also equip delegators to defensively respond to maliciously worded proposals designed to semantically appeal to the training documentation of the delegate agents. In an arrangement wherein users do not adequately control their delegate agents, events could rapidly devolve into a runaway governance dialogue between maliciously well-worded proposals and semantically captive delegate agents.
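
Gathering those escape hatches into one place, again reusing the earlier hypothetical sketches:

```python
class ControlledAgent:
    """The delegator's full control surface over their delegate agent."""

    def __init__(self, agent: DelegateAgent, registry: DelegationRegistry, owner: str):
        self.agent = agent
        self.registry = registry
        self.owner = owner
        self.active = True

    def deactivate(self) -> None:
        self.active = False  # agent stops queuing votes entirely

    def retrain(self, new_docs: list[str]) -> None:
        self.agent = DelegateAgent(new_docs)  # rebuild from fresh documentation

    def veto(self, queued: QueuedVote, choice: str) -> bool:
        return queued.override(choice)  # manual override of a queued decision

    def undelegate(self) -> None:
        self.registry.undelegate(self.owner)  # sever the delegation at will
```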

Another interesting consideration is that, given this hypothetical arrangement, users would be able to train their agents on compliance documentation, and these delegates could then, with some degree of accuracy, ascertain certain legal risks associated with a given proposal. Likewise, these agents could also conceivably be trained on the organization's constitutional documentation, to predictively detect unconstitutional aspects of proposals. The methodology required to effectively train agents in these respects is, again, beyond my understanding.
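
As a toy illustration of such screening, with keyword matching once more standing in for whatever trained model would perform the actual analysis:

```python
def flag_constitutional_risks(proposal_text: str,
                              prohibited_terms: list[str]) -> list[str]:
    # Surface clauses that conflict with terms drawn from the organization's
    # constitutional or compliance documentation, for human review.
    text = proposal_text.lower()
    return [term for term in prohibited_terms if term.lower() in text]

flag_constitutional_risks(
    "Grant the multisig unilateral authority over the treasury",
    ["unilateral authority", "retroactive amendment"],
)
# -> ["unilateral authority"]
```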

All in all, it seems there are some compelling possibilities surrounding the usage of personal agents for governance delegation purposes. Some of the risks highlighted here would seemingly be mitigated by granting users as much control as possible over their agents, but of course there are likely risks which would persist even given such precautions.

  [1] See Notes on the Distinction between Theoretical and Applied Governance
  [2] See What Are Onchain Organizations?