Do Your Best: A Social Question?

I’ve always struggled with the concept of “doing your best”, especially with regard to avoiding harm. This morning, from my sick bed, I’ve been playing around with how I might formalise this question (I always find formalisation helps understanding). I have not got very far with the formalisation itself, but here are some brief notes that I think could be picked up and developed later. Perhaps others have already done so; if so, I would be grateful for pointers to summary literature in this space.


The context in which this arises, I think, is the question of what it means to be responsible for something, or even to be blamed for it. We might try to answer this by appeal to social norms: what the “reasonable person” would have done. But in truth I’m not a big fan of social norms: they may be politically or socially biased, I’m never confident that they are not arbitrary, and they are often opaque to those who think differently.

So rather than starting from norms, in common with many neurodivergent people, I want to think from first principles. How should we define our own responsibilities when we act with incomplete information? What does it mean to be “trying one’s best” in that situation? And how do we avoid becoming completely overwhelmed in the process?


Responsibility and Causality

One natural starting point is causal responsibility. If I take action A and outcome B occurs, we ask: would B have been different if I had acted otherwise? Causal models could potentially make this precise through counterfactuals. This captures the basic sense of control: how pivotal was my action?
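The counterfactual test can be sketched in a few lines of code. This is only a toy structural model: the particular outcome rule, and the function names, are invented purely for illustration.

```python
# Toy sketch of counterfactual (causal) responsibility.
# A structural model maps my action and the background circumstances
# to an outcome; pivotality asks whether acting otherwise would have
# changed that outcome.

def outcome(action: bool, circumstances: bool) -> bool:
    # Hypothetical rule: harm occurs only if I act AND circumstances are bad.
    return action and circumstances

def was_pivotal(action: bool, circumstances: bool) -> bool:
    """Would the outcome have differed had I acted otherwise?"""
    actual = outcome(action, circumstances)
    counterfactual = outcome(not action, circumstances)
    return actual != counterfactual

print(was_pivotal(True, True))   # True: acting otherwise avoids the harm
print(was_pivotal(True, False))  # False: the outcome never depended on me
```

The point of the sketch is only that “how pivotal was my action?” is a well-posed question once a causal model is fixed; the hard part, of course, is that in real life we rarely know the model.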

But responsibility isn’t just about causality. It is also about what I knew (or should have known) when I acted.


Mens Rea in the Information Age

The legal tradition of mens rea, the “guilty mind”, is helpful here. It recognises degrees of responsibility, such as:

  • Intention: I aimed at the outcome.
  • Knowledge: I knew the outcome was very likely.
  • Recklessness: I recognised a real risk but went ahead regardless.
  • Negligence: I failed to take reasonable steps that would have revealed the risk.

It’s the final one of these, negligence, that causes me the most difficulty on an emotional level. A generation ago, a “reasonable step” might be to ask a professional. But in the age of abundant online information, the challenge is defining what “reasonable steps” now are. No one can read everything, and I personally find it very hard to draw the line.

If we knew how the information we gain increases with the time we spend collecting it, we would be in an informed position: we could decide, based on the limited time we have, how long to explore any given problem.


From Omniscient Optimisation to Procedural Reasonableness

However, we must accept that there are at least two levels of epistemic uncertainty here. We don’t know everything there is to know, but nor do we even know how the amount of useful information we collect will vary based on the amount of time we put in. Maybe just one more Google search or just one more interaction with ChatGPT will provide the answer to our problem.

In response, I think we must shift the benchmark. Trying one’s best does not mean picking the action that hindsight reveals as correct. It means following a reasonable procedure given bounded time and attention.

So what would a reasonable procedure look like? I would suggest that we start with the most salient, socially accepted, and low-cost information sources, and then continue investigating until further investigation is unlikely to change the decision by enough to justify its cost.


In principle, we may want to continue searching until the expected value of more information is less than its cost. But of course, in practice we cannot compute this expectation.

A workable heuristic, then, appears to be to allocate an initial time budget for exploration; if by the end the information picture has stabilised (no new surprises, consistent signals), stop and decide.

I suspect there is a good Bayesian interpretation of this heuristic.
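As a toy illustration of what such an interpretation might look like — purely a sketch, with the Beta–Bernoulli model, the uniform prior, and the stability threshold all being my assumptions rather than anything principled — one could maintain a posterior over an unknown risk and stop when either the time budget runs out or the posterior mean stops moving:

```python
# Illustrative sketch: treat each piece of evidence as a noisy binary signal
# about an unknown risk, maintain a Beta posterior over the risk probability,
# and stop when the budget is exhausted or the estimate has stabilised.

def explore(signals, budget, stability_eps=0.02):
    alpha, beta = 1.0, 1.0           # uniform Beta(1, 1) prior over the risk
    prev_mean = alpha / (alpha + beta)
    for step, risky in enumerate(signals):
        if step >= budget:
            break                    # bounded time: stop regardless
        alpha += 1 if risky else 0   # standard Beta-Bernoulli update
        beta += 0 if risky else 1
        mean = alpha / (alpha + beta)
        if abs(mean - prev_mean) < stability_eps:
            break                    # no new surprises: picture has stabilised
        prev_mean = mean
    return alpha / (alpha + beta)    # current best estimate of the risk

estimate = explore([False, False, True, False, False, False], budget=10)
```

The “no new surprises” clause here is just the condition that the latest observation barely moved the posterior, which seems like a natural Bayesian reading of the heuristic above.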


The Value of Social Norms

What then of social norms? What counts as an obvious source, an expert, or standard practice is socially determined. Even if I am suspicious of social norms, I have to admit that they carry indirect value: they embody social learning from others’ past mistakes. Especially in contexts where catastrophic harms have occurred, such as medicine and engineering, norms, heuristics and rules of thumb represent distilled experience.

So while norms need (should?) not be obeyed blindly, they deserve to be treated as informative priors: they tell us about where risks may lie and which avenues to prioritise for exploration.


Trying One’s Best: A Practical Recipe

Pulling these threads together, perhaps “trying one’s best” under uncertainty means:

  1. Start with a first-principles orientation: aim for causal clarity and avoid blind conformity.
  2. Consult obvious sources of information and relevant social norms as informative signals.
  3. Allocate an initial finite time for self-investigation.
  4. Stop when information appears stable. If significant new evidence arises during investigation, continue. The significance threshold should vary depending on the potential impact.
  5. Document your reasoning if you depart from norms.
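One aspect of the recipe that lends itself to a sketch is step 4’s impact-dependent significance threshold. The following is a minimal, entirely illustrative rendering — the averaging rule and all the numbers are my assumptions, not a proposal:

```python
# Sketch of steps 3-4: bounded investigation with a stopping threshold
# that tightens as the potential impact of the decision grows.

def significance_threshold(base_eps: float, potential_impact: float) -> float:
    """Step 4: higher-stakes decisions demand a stricter notion of 'stable'."""
    return base_eps / max(potential_impact, 1.0)

def investigate(observations, time_budget: int, potential_impact: float):
    """Run through the evidence until it stabilises or the budget is spent."""
    eps = significance_threshold(base_eps=0.1, potential_impact=potential_impact)
    estimate = None
    for t, obs in enumerate(observations):
        if t >= time_budget:         # step 3: bounded self-investigation
            break
        new_estimate = obs if estimate is None else 0.5 * (estimate + obs)
        if estimate is not None and abs(new_estimate - estimate) < eps:
            return new_estimate, "stable"
        estimate = new_estimate
    return estimate, "budget exhausted"

# Identical evidence, different stakes: the low-stakes run stops early,
# while the high-stakes run consumes the whole evidence stream.
low = investigate([0.3, 0.35, 0.1], time_budget=10, potential_impact=1.0)
high = investigate([0.3, 0.35, 0.1], time_budget=10, potential_impact=100.0)
```

The design choice being illustrated is only that the same evidence can justify stopping in a low-stakes case while demanding further investigation in a high-stakes one.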

Responsibility is not about hindsight-optimal outcomes. It is about following a bounded, transparent, and risk-sensitive procedure. Social norms play a role not as absolute dictates, but as evidence of collective learning obtained in the context of a particular social environment. Above all, “trying one’s best” means replacing the impossible ideal of omniscience with procedural reasonableness.

While this approach still seems very vague, it has at least helped me to put decision making in some perspective.


Acknowledgements

The idea for this post came as a result of discussions with CM over the last two years. The fleshing out of the structure of the post and argument were over a series of conversations with ChatGPT 5 on 26th October 2025. The text was largely written by me.

Ontology and Oppression

This Autumn I read Katharine Jenkins’ book Ontology and Oppression. The ideas and approaches taken by Jenkins resonated with me, and I find myself consciously or subconsciously applying them in many contexts beyond those she studies. I therefore thought it was worth a quick blog post to summarise the key ideas, in case others find them helpful, and to recommend that you also read Jenkins’ work if you do.

Jenkins studies the ontology of “social kinds” from a pluralist perspective: the view that there can be many different definitions of social kinds with the same name (e.g. ‘woman’, ‘Black’), and that several of them can be useful and/or the right tool to understand the world in the right circumstances. After a general theoretical introduction, she focuses on gender and race to find examples of such kinds, but the idea is clearly applicable much more broadly.

Jenkins begins by describing her “Constraints and Enablements” framework, arguing that what it means to be a member of a social kind is at least partly determined by being subject to certain social constraints and enablements, which she classifies in certain ways. These can be imposed on you by (some subset of) society, or can even be self-imposed through self-identification as a member of a given social kind. Jenkins defines two types of wrong that can come about as a result of being considered a member of a given social kind: ‘ontic injustice’, where the constraints and enablements constitute a ‘wrong’, and a proper subclass, ‘ontic oppression’, where the constraints and enablements additionally “steer individuals in this kind towards exploitation, marginalisation, powerlessness, cultural domination, violence and/or communicative curtailment”. She argues that a pluralist framework can be valuable as a philosophical tool for liberation, and studies how intersectionality arises naturally in her approach.

Jenkins classifies the race and gender kinds she studies as ‘hegemonic kinds’, ‘interpersonal kinds’ and ‘identity kinds’. I find this classification compelling for anyone wanting to really understand power structures and help people, rather than simply shouting about identity politics from the sidelines – a form of intervention that sadly characterises much of the ‘debate’ in the ‘culture wars’ at the moment. It also provides a useful toolbox for understanding how a social kind (e.g. ‘Black’, ‘woman’) can be hegemonically oppressive while the corresponding interpersonal and identity kinds can sometimes serve an emancipatory function.

Ultimately, Jenkins’ account allows us to break away from some of the more ridiculous lines of argument we’ve seen in recent years, which try to ‘define away’ issues. At the end of the book, Jenkins takes aim at the ‘ontology-first approach’: the idea that one should first settle ‘the’ meaning of a social kind (e.g. ‘what is a woman?’) and from that derive appropriate (in this case gendered) social practices. Jenkins shows that this approach, so widespread in society, does not fit with her framework. She challenges us to ask: what do we actually want to change about society? And from that, to understand what kinds make sense to talk about, in what context, and how.