Django Ethics App | A Discussion on Ethics as it Relates to Web Technologies and Conway's Law

@RobertBunge shared this in the WhatsApp group and I thought it would make an interesting discussion:

I’ve been bouncing back and forth on Gemini between personal research (history of religion and philosophy) and technical research for a class I am teaching (Django). Gemini remembered both of these contexts and asked me if I wanted to see a project combining both. I figured there was not much to lose by taking a look. The results are linked below. In a recent white paper, Rufus, Rosie, and Sylvie raised the issue of using technology as a God. OK, well that sounds like blasphemy to me (quite a bit of that going around nowadays, BTW...). But is it possible to align agentic AI with godly purpose? The linked document suggests, at a technical level, what it might take to implement ethical perspectives in current information environments.

I responded as follows, and I think this topic is important at the present time for obvious reasons.

Conway’s Law says that organizations tend to design systems that mirror their own communication structure. By analogy, if you were to design a god or moral compass using Django, I would expect that system to mirror the structure of Django - to be constrained by Django’s abstractions and implementation patterns. That is not necessarily a death sentence for the idea, but it is something to consider.

Bob’s response:

Django is basically 3-tier web architecture. Its specific flavor of that is to facilitate zero-brainer content management by non-technical users. (It was originally developed by a newspaper in Kansas.) The idea of screening AI API calls and logging them is not especially Django-specific. There may be something to think about here in relation to Conway’s Law, but I doubt it has much to do with Django per se.

My point is basically the following:

a) a system built in Django will tend to inherit Django’s way of organizing the world
b) that inherited structure will shape what kinds of moral reasoning are easy, hard, visible, or invisible
c) therefore, the resulting “moral compass” may reflect framework logic more than moral reality

Specific properties of Django that make a difference to how an ethical system implemented in it might function:

1. It encourages app-level enforcement instead of system-level enforcement.
Django makes it natural to put logic in views, middleware, forms, serializers, model methods, and admin hooks. That is productive, but it can scatter policy across the codebase. For an AI agent, that is dangerous because ethics and safety rules need to be centralized and non-bypassable. If one code path goes through a view check, another through a Celery task, and another directly through the ORM, you end up with uneven enforcement.
That is how you get a “less ethical god”: not evil, just inconsistent.
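A minimal sketch of what the centralized alternative looks like, in plain Python rather than real Django or Celery (all names here are hypothetical): every execution path, whether it stands in for a view, a background task, or a management command, routes through the same chokepoint.

```python
# Hypothetical sketch: one non-bypassable policy chokepoint, called from
# every entry point, instead of checks scattered across views and tasks.

class PolicyViolation(Exception):
    pass

def check_action(actor: str, action: str, resource: str) -> None:
    """Single centralized policy check; raises if the action is blocked."""
    blocked = {("agent", "delete", "user_data")}
    if (actor, action, resource) in blocked:
        raise PolicyViolation(f"{actor} may not {action} {resource}")

def http_view(actor: str) -> str:
    """Stands in for a Django view."""
    check_action(actor, "read", "user_data")
    return "ok"

def background_task(actor: str) -> str:
    """Stands in for a Celery task -- same chokepoint, same rules."""
    check_action(actor, "delete", "user_data")
    return "deleted"
```

The point is structural: if the view path and the task path call different checks (or the task path calls none), enforcement is uneven by construction.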

2. Middleware feels more universal than it is.
Django makes middleware feel like a chokepoint, but it only sees HTTP request flow. If your agent acts through workers, scheduled jobs, internal services, management commands, or direct tool calls, middleware is irrelevant. So a team may falsely believe they have built a moral firewall when they have really built a web firewall.
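A toy demonstration of that gap, using a dict as a stand-in for an HTTP request (real Django middleware wraps `HttpRequest` objects, but the shape is the same): the middleware screens whatever flows through it, and anything that does not flow through it is simply never screened.

```python
# Hypothetical sketch: middleware only sees the HTTP path.
def ethics_middleware(get_response):
    """Django-style middleware factory; 'request' is a dict stand-in."""
    def middleware(request):
        request["screened"] = True      # screening happens here...
        return get_response(request)
    return middleware

def view(request):
    return {"screened": request.get("screened", False)}

wrapped = ethics_middleware(view)

# HTTP-style path: the request passes through the middleware.
# Worker/management-command path: the same view logic is reachable
# by calling view() directly, and no screening ever runs.
```

Calling `wrapped({})` yields a screened result; calling `view({})` directly, as a worker or management command effectively would, yields an unscreened one.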

3. The ORM makes data access too easy.
Django’s ORM is one of its strengths, but morally sensitive systems often need friction around access:

  • who can read this field

  • under what purpose

  • with what audit trail

  • with what escalation path

The ORM is optimized for developer convenience, not ethical review. If an agent or tool wrapper gets model access too directly, it becomes easy to over-read, over-join, or expose fields without deliberate policy checks.
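One way to add that friction is a purpose-gated accessor that refuses reads without a declared purpose and writes an audit record on every access. This is a plain-Python sketch with invented names and an in-memory log, not a real ORM wrapper:

```python
# Hypothetical sketch: field access requires a declared purpose and
# leaves an audit trail, instead of being free via direct ORM reads.
import datetime

AUDIT_LOG = []
ALLOWED = {("support", "email"), ("billing", "email"), ("billing", "card_last4")}

class AccessDenied(Exception):
    pass

def read_field(record: dict, field: str, *, purpose: str, actor: str):
    """Read one field of a record, but only for an approved purpose."""
    if (purpose, field) not in ALLOWED:
        raise AccessDenied(f"purpose {purpose!r} may not read {field!r}")
    AUDIT_LOG.append({
        "actor": actor,
        "field": field,
        "purpose": purpose,
        "at": datetime.datetime.utcnow().isoformat(),
    })
    return record[field]
```

The inconvenience is the feature: an agent or tool wrapper cannot over-read without either stating a purpose the policy table allows, or failing loudly.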

4. “Fat model” culture can blur policy and behavior.
Django often pushes domain logic into model methods or manager methods. That is good for business rules, but ethics rules are not just business rules. They are often cross-cutting:

  • privacy

  • consent

  • fairness

  • escalation

  • conflict of interest

  • human approval thresholds

If those get buried in model methods, they become hard to audit and easy to bypass.
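One conventional way to keep a cross-cutting rule like consent out of individual model methods is to express it as a decorator applied at the boundary, so it is visible and greppable in one place. A sketch with hypothetical names and an in-memory consent table:

```python
# Hypothetical sketch: consent as an explicit cross-cutting layer
# (a decorator), not a check buried inside each model method.
import functools

CONSENTS = {"alice": True, "bob": False}

class ConsentRequired(Exception):
    pass

def requires_consent(fn):
    """Wrap any operation on a data subject with a consent check."""
    @functools.wraps(fn)
    def wrapper(subject, *args, **kwargs):
        if not CONSENTS.get(subject, False):
            raise ConsentRequired(f"no consent on record for {subject}")
        return fn(subject, *args, **kwargs)
    return wrapper

@requires_consent
def profile_for_training(subject):
    """Stands in for any operation that uses personal data."""
    return f"profile:{subject}"
```

Auditing then reduces to finding every function carrying the decorator, rather than reading every model method for an embedded `if`.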

5. Signals can create hidden governance.
Signals are attractive for “ethical reactions” like lockouts, alerts, or red flags. But signals can make the most important controls implicit and hard to trace. If a critical safety action depends on a signal firing somewhere off to the side, you have made governance less legible. A moral system should usually be explicit.
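The legibility difference can be shown in a few lines of plain Python (stand-ins for `django.dispatch`, not the real signal machinery): in the implicit version the lockout is wired up somewhere off to the side, while the explicit version states the safety action at the call site.

```python
# Hypothetical sketch: signal-style implicit governance vs explicit calls.
handlers = []

def connect(fn):
    """Signal-style registration -- typically far from the call site."""
    handlers.append(fn)

def fire(event):
    for h in handlers:
        h(event)

locked = set()

def lockout(event):
    locked.add(event["user"])

connect(lockout)  # easy to miss when reading the action code below

def risky_action_implicit(user):
    fire({"user": user})      # what actually happens is invisible here

def risky_action_explicit(user):
    lockout({"user": user})   # the safety action is stated where it occurs
```

Both versions do the same thing; only the explicit one can be audited by reading the function that performs the risky action.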

6. Django admin can normalize overpowered human access.
Django admin is fantastic, but it encourages a culture of broad internal visibility. In a safety-sensitive AI system, the human review layer itself needs constraints. Otherwise the “ethical oversight” tool becomes a privileged backdoor where reviewers see too much and act with too little accountability.

7. Monolith comfort can hide trust boundaries.
Django is very good at the coherent monolith. That can be a feature. But when you are building agent governance, you often need hard separations:

  • planner vs executor

  • user-facing app vs policy engine

  • model reasoning vs tool authorization

  • normal logs vs tamper-evident audit logs

A monolith makes it easy to blur those lines because everything can call everything.

So the short version is: Django tends to produce a conveniently governed system, and convenience is often the enemy of serious ethical control.

Now to your second question: yes, other frameworks can push you toward different “ethical shapes,” though not automatically better ones.

Laravel

Laravel has a somewhat similar risk profile to Django.

It is elegant, batteries included, and productive. Like Django, it can encourage:

  • app-layer policy checks

  • convenient ORM access

  • framework-native abstractions that feel safer than they are

  • governance mixed with ordinary application logic

Laravel’s authorization features can be strong, and PHP web apps often have very explicit request lifecycles, which can help for ordinary permissioning. But for agent ethics, Laravel would still tend to shape the system around the web app and its service container. So compared to Django, I would expect different syntax, similar temptations.

Laravel may encourage slightly more explicit service-layer organization in some teams, which can help if you build a central policy service well. But that is a team habit, not a guarantee from the framework.

Luminus

Luminus is more interesting in this discussion.

Because it is smaller, more composable, and rooted in Clojure’s functional style, it can push you toward:

  • explicit data flow

  • pure functions

  • fewer magical hooks

  • clearer separation of transformation vs effect

  • more visible boundaries

That can actually help ethical design. A policy engine is often better expressed as:

  • input facts

  • deterministic evaluation

  • decision output

  • separate execution step

Functional architecture tends to fit that cleanly. You are less likely to hide key moral logic in lifecycle hooks or ORM conveniences and more likely to model it as a decision function.
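The facts-in, decision-out, separate-execution shape described above can be sketched in a few lines; Luminus would express it in Clojure, but the structure is the point, so here it is in Python with invented rule thresholds:

```python
# Hypothetical sketch: a pure decision function plus a separate effect step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

def evaluate(facts: dict) -> Decision:
    """Pure and deterministic: same input facts, same decision output."""
    if facts.get("harm_risk", 0) > 0.7:
        return Decision(False, "harm risk above threshold")
    if not facts.get("consent", False):
        return Decision(False, "no consent on record")
    return Decision(True, "all checks passed")

def execute(decision: Decision, action):
    """Effects live only in this step, gated on the decision."""
    return action() if decision.allowed else None
```

Because `evaluate` has no side effects, it can be unit-tested, replayed against audit logs, and reasoned about separately from whatever `execute` actually does.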

But Luminus has its own tradeoff: because it gives you less scaffolding, you can end up with a fragmented system if the team is not disciplined. So it may produce a clearer ethical core but require more intentional engineering around observability, admin tooling, and workflow management.

Other framework “shapes”

If you zoom out, the ethical differences come less from language and more from architectural bias.

A framework that biases toward:

  • pure functions

  • explicit dependency injection

  • effect isolation

  • message passing

  • immutable event logs

  • capability-based access

will usually make it easier to build a serious governance layer.

A framework that biases toward:

  • global app context

  • convenience hooks

  • broad ORM access

  • “just put it in middleware”

  • implicit side effects

will usually make it easier to build something that looks ethical in demos but has bypasses in practice.

So if we apply your Conway’s Law lens:

  • Django mirrors a pragmatic, centralized, productive web team. Ethics becomes admin panels, middleware, model rules, and audit tables.

  • Laravel mirrors a polished app-framework organization with strong developer ergonomics. Ethics becomes service classes, policies, guards, and app conventions.

  • Luminus mirrors a smaller, more explicit, composition-oriented team. Ethics becomes dataflow, rules engines, and function boundaries.

That does not mean Luminus is morally superior. It means it is more likely to force the team to state the ethical machinery directly rather than smuggling it into framework glue.

My actual recommendation would be:

Use Django if you want the fastest path to:

  • review UI

  • audit logs

  • workflow states

  • human escalation

  • internal operations tooling

But do not put your core ethical logic “inside Django” as scattered framework features. Put the real compass in a separate, explicit layer:

  • policy engine

  • rule evaluator

  • action broker

  • capability checks

  • immutable audit trail

Then let Django be the shell around that.
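To make the "separate, explicit layer" concrete, here is a sketch of an action broker with capability checks and a hash-chained (tamper-evident) audit trail. Everything is hypothetical and framework-free; a Django view would do nothing but call `broker.request(...)` and render the result.

```python
# Hypothetical sketch: the "compass" as a standalone broker that Django
# merely wraps -- capability checks plus an append-only, hash-chained log.
import hashlib
import json

class Broker:
    def __init__(self, capabilities):
        self.capabilities = capabilities  # actor -> set of allowed actions
        self.audit = []                   # each entry chained to the last

    def _append(self, entry):
        prev = self.audit[-1]["hash"] if self.audit else ""
        payload = prev + json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.audit.append(entry)

    def request(self, actor, action):
        """Every action, allowed or not, is decided here and logged."""
        allowed = action in self.capabilities.get(actor, set())
        self._append({"actor": actor, "action": action, "allowed": allowed})
        return allowed
```

Chaining each entry's hash to the previous one means an edit anywhere in the log invalidates everything after it, which is the property ordinary Django log tables do not give you.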

If you wanted a framework more naturally aligned with explicit ethical reasoning, I would look less at “Django vs Laravel” and more at architectures built around:

  • functional cores

  • event sourcing

  • policy-as-code

  • capability-based design

That is where the deepest difference shows up.

A crisp way to say it:

Django risks a less ethical god when it tempts you to confuse framework control points with moral authority.
The problem is not that Django is weak. The problem is that it is so strong and convenient that you may stop noticing where the real boundaries should be.

I think a more fruitful discussion might be around different approaches to software architecture and how an ethical moral compass might be implemented therein. E.g. DDD, Onion, Reactive, etc.

For example,

DDD
DDD is probably the best fit if you want ethics to live in the meaning of the domain rather than as bolt-on filters. You can model concepts like consent, obligation, harm, exception, appeal, stewardship, or fiduciary duty as domain concepts instead of just validation rules.
Its strength is that ethics can be embedded in aggregates, domain services, and ubiquitous language.
Its weakness is that it can still make morality look cleaner and more formalizable than it really is.

Onion / Hexagonal / Clean Architecture
This is very strong if your point is that ethical rules should not depend on framework details.
You can put moral reasoning or policy evaluation in the domain core, and keep Django, databases, queues, and LLM providers in outer layers. That lets you say: the ethical policy is primary; the framework is just a delivery mechanism.
This is probably the best answer to the “it will just mirror Django” objection.

Reactive / Event-Driven
This is interesting because it treats ethics less as a gate and more as ongoing response.
Instead of one central “moral compass,” you get streams of events, monitors, compensating actions, escalation flows, and continuous oversight.
That fits real organizations pretty well, because ethical failures are often discovered after partial action has already occurred.
The weakness is fragmentation: responsibility can get diffused across many consumers and handlers.

Layered MVC-style architecture
This is the default shape many Django projects drift into. Ethics here often becomes middleware, decorators, serializers, and validators.
That is practical, but also the architecture most likely to reduce morality to compliance checks.

Microservices
This raises a different question: is there one moral authority for the whole system, or does each service own its own policies?
That can be realistic, because actual organizations often have plural and conflicting values.
But it also means inconsistency, policy drift, and hard-to-audit decisions across service boundaries.

Rule engine / policy engine architecture
This is useful if you want explicit and inspectable ethical rules.
The upside is traceability.
The downside is that what can be expressed tends to become the only thing treated as ethically real.

For context, the Django class I am teaching is 100-level, so yeah, Django. Not anything more complicated or advanced! As I expressed to the class last session, I have no idea if Django will still be used at all 2 or 3 years from now, so don’t get too hung up on the specific technology. Focus instead on project management and learning processes.

That said, Gemini offered to show me some implementation code, so I took a look. Not much to see there, just some Django ORM language that would be needed for any sort of Django app. However, on an architectural level, I do like this one phrase it used for framing:

‘In Django, the Model is the “Source of Truth.” For an AI Auditor, you want to capture not just what the AI did, but why it thought it was okay.’ (For those not familiar with Django, the ‘Model’ is the database schema and the data contained in the back-end database.)
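Capturing the "why" alongside the "what" is mostly a matter of deciding the record shape up front. In a real project this would be a Django model with these fields; here a plain dataclass with hypothetical field names stands in:

```python
# Hypothetical sketch: an audit record that stores the action, the
# rationale, and the policy version it was evaluated against.
from dataclasses import dataclass, field
import datetime

@dataclass
class AIAuditRecord:
    action: str           # what the AI did
    rationale: str        # why it believed the action was permitted
    policy_version: str   # which rule set it was evaluated against
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.utcnow().isoformat()
    )

rec = AIAuditRecord(
    action="sent_email",
    rationale="user opted in 2024-01-03",
    policy_version="v12",
)
```

Storing the policy version matters as much as the rationale: it lets a later reviewer re-run the same facts against the rules that were actually in force at the time.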

That, I believe, cross applies to a lot of things. I’ve been consuming way too many YouTubes about how to become “AI native” and one thing that jumps out at me is that data is the crown jewels. UI can be vibe coded. Your data warehouse, not so much. Ethics+Governance+Security question number one is who gets access to the data? Then we can start spinning Defense in Depth, Separation of Duties, no Single Point of Failure, and other classic security ideas around that. Of course it’s a dumb idea to make apps do all the governance work. It’s a dumb idea to make any given layer or technology do all the governance work. (Kind of sounds like something one might wish to consider for purely human governance systems as well!)