The science behind Jyotirmay's daily guidance
Jyotirmay's daily guidance is built as a practical behavior-support system that turns what you are facing today into short, usable next steps tailored to your personality profile. This page explains the method behind that guidance: the real-world problem it is designed to solve, the inputs it uses, the behavioral approaches it draws from, how personality changes the recommendations, how AI is narrowly and safely applied, and what limits are built into the system. The aim is not prediction or diagnosis, but clear, situation-specific coaching that is easier to act on in everyday life.
Abstract
Jyotirmay delivers brief, situation-specific behavioral guidance intended as non-clinical coaching support for everyday functioning. Guidance is personalized using a Five-Factor trait profile and the user’s selected daily situations. The guidance layer is explicitly separated from the Jyotish computation layer. This document summarizes the behavioral science foundations, personalization logic, and safety constraints, and clarifies boundaries relative to psychotherapy and psychiatric practice.
1. Objective and boundary conditions
1.1 Objective
The guidance system aims to help users convert intention into action in day-to-day contexts by providing:
- small, actionable steps,
- clear “next best action” suggestions,
- situation-specific coping and planning micro-skills,
- personalization to reduce friction and increase adherence.
1.2 Boundary conditions
The system is intentionally bounded:
- It is not psychotherapy.
- It is not diagnosis.
- It does not attempt to treat psychiatric disorders, manage crises, or deliver clinical risk assessment.
- It does not provide medical, legal, or financial advice.
2. Separation of Jyotish and guidance layers
Jyotirmay is architected as two separable components:
- Jyotish computation layer: deterministic calculation of chart and pañcāṅga entities.
- Guidance layer: behavioral coaching suggestions generated from personality traits + user-selected situations.
The guidance layer does not require astrological signals to operate and should remain independently auditable.
3. Inputs and representation
Guidance generation uses:
- Situation selection: user picks anticipated contexts (e.g., meeting, conflict, decision pressure, focus work, social engagement).
- Optional state inputs: self-rated energy/mood or time constraints.
- Trait profile: stable dispositions from the Big Five structure.
- User constraints: language preference, tone preference, and any safety exclusions.
The system represents each selected situation in a structured schema including:
- situation category,
- time horizon (today),
- difficulty level (if provided),
- and constraints (time available, environment).
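The structured schema above can be sketched as a small data record; the field names, value ranges, and validation rule here are illustrative assumptions, not the production schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SituationInput:
    """Hypothetical representation of one user-selected situation."""
    category: str                          # e.g. "meeting", "conflict", "focus_work"
    time_horizon: str = "today"            # guidance is scoped to the current day
    difficulty: Optional[int] = None       # optional self-rated 1-5
    time_available_min: Optional[int] = None
    environment: Optional[str] = None      # e.g. "office", "home"
    safety_exclusions: List[str] = field(default_factory=list)

def validate(s: SituationInput) -> bool:
    """Reject out-of-range or out-of-scope inputs before guidance generation."""
    if s.difficulty is not None and not 1 <= s.difficulty <= 5:
        return False
    return s.time_horizon == "today"
```

Validating inputs at the boundary keeps the downstream module-selection logic simple: it only ever sees well-formed, day-scoped situations.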
4. Behavioral science methods used
The guidance system uses a modular “micro-intervention library.” Modules are selected and parameterized; final language is produced as short steps.
4.1 Implementation intentions (If–Then planning)
A core technique is explicit contingency planning:
- “If X occurs, then I will do Y.”
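The contingency format above can be rendered with a minimal helper; the trigger and action in the example are illustrative, not drawn from the app.

```python
def if_then_plan(trigger: str, action: str) -> str:
    """Render a single contingency plan in the canonical if-then format."""
    return f"If {trigger}, then I will {action}."

# Hypothetical example for a procrastination trigger.
plan = if_then_plan(
    "I catch myself opening social media before starting the report",
    "close the tab and write one sentence of the report first",
)
```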
Implementation intentions are used for predictable obstacles (procrastination triggers, avoidance, social discomfort, impulsive responding).
4.2 CBT-informed cognitive and behavioral micro-skills (non-clinical adaptation)
The system uses CBT-consistent principles in a coaching frame, such as:
- cognitive reappraisal prompts (alternative interpretations),
- behavioral activation micro-steps (start small; schedule; reduce avoidance),
- coping skills for stress and uncertainty (brief grounding; attention shifts),
- perspective taking and “test the thought” prompts where appropriate.
These are presented as skills practice, not as treatment protocols, and avoid clinical language that implies diagnosis.
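A reappraisal module of the kind listed above can be as simple as a bank of skills-practice prompts; the prompt wording here is an assumption used only to make the pattern concrete.

```python
# Illustrative non-clinical reappraisal prompts (skills practice, not treatment).
REAPPRAISAL_PROMPTS = [
    "What is one other plausible explanation for what happened?",
    "What would you tell a colleague in the same situation?",
    "What evidence would change your current interpretation?",
]

def reappraisal_prompt(index: int) -> str:
    """Cycle through the prompt bank; no diagnostic or treatment language."""
    return REAPPRAISAL_PROMPTS[index % len(REAPPRAISAL_PROMPTS)]
```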
4.3 PST-informed structured problem solving
For practical difficulties (workload, decision overload, interpersonal friction), the system uses a structured sequence:
- define the problem in one sentence,
- generate options without judgment,
- select one next action under constraints,
- evaluate after execution and iterate.
The goal is to support functional problem solving rather than symptom reduction as a clinical endpoint.
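The four-step sequence can be sketched as two small helpers; the one-sentence rule and the time-based selection heuristic are illustrative assumptions, not the production logic.

```python
def define_problem(text: str) -> str:
    """Step 1: keep the problem definition to one sentence."""
    return text.split(".")[0].strip() + "."

def choose_next_action(options, time_available_min):
    """Step 3: select one next action under a time constraint.

    `options` is a list of (description, estimated_minutes) pairs,
    generated without judgment in step 2.
    """
    for description, minutes in options:
        if minutes <= time_available_min:
            return description
    return "defer and re-plan with smaller steps"
```

Step 4 (evaluate and iterate) would feed the outcome back into the next day’s option generation.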
4.4 Communication micro-skills for high-friction interactions
When a situation involves conversation (manager discussion, partner conflict, boundary setting), the system provides:
- one-sentence “ask” formats,
- calm boundary statements,
- and “observe–feel–need–request” style scaffolds.
These avoid prescriptive counseling language and any advice that could escalate risk.
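An “observe–feel–need–request” scaffold can be filled from a fixed sentence template; the template wording and the example values are assumptions for illustration, not app copy.

```python
def ofnr(observation: str, feeling: str, need: str, request: str) -> str:
    """Fill an observe-feel-need-request sentence scaffold."""
    return (
        f"When {observation}, I feel {feeling}, because I need {need}. "
        f"Would you be willing to {request}?"
    )

# Hypothetical boundary-setting example.
msg = ofnr(
    "our meetings start late",
    "stressed",
    "predictable planning time",
    "start at the scheduled time this week",
)
```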
5. Personalization logic (trait × situation × state)
Personalization is implemented as constraint-based selection and phrasing:
- Low Extraversion: prefer low-activation, low-exposure steps; reduce “networking” demands; emphasize preparation and controlled contact.
- Low Emotional Stability: front-load regulation and grounding micro-steps; shorten action steps; avoid excessive uncertainty.
- High Conscientiousness: leverage planning, checklists, and follow-through; avoid over-structuring if it increases perfectionism.
- High Openness/Intellect: use meaning/curiosity reframes; allow experimentation; propose learning-oriented actions.
- Low Agreeableness / high assertiveness: emphasize repair strategies and clarity; prevent interpersonal escalation.
Importantly, personalization is used to improve adherence and reduce friction, not to infer psychopathology.
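Constraint-based selection of this kind can be sketched as simple trait-threshold rules; the module texts, trait keys, and thresholds below are illustrative assumptions, not the production logic.

```python
# Hypothetical micro-intervention library (module key -> short step text).
MODULES = {
    "grounding_first": "Take three slow breaths before opening the agenda.",
    "low_exposure_prep": "Prepare two talking points before the meeting.",
    "checklist": "List the three steps, then start step one.",
}

def select_modules(traits: dict, situation: str) -> list:
    """Filter and order modules by trait constraints (traits scored 0-1)."""
    selected = []
    if traits.get("emotional_stability", 0.5) < 0.4:
        selected.append("grounding_first")    # front-load regulation steps
    if traits.get("extraversion", 0.5) < 0.4 and situation == "meeting":
        selected.append("low_exposure_prep")  # reduce social-exposure demands
    if traits.get("conscientiousness", 0.5) > 0.6:
        selected.append("checklist")          # leverage planning strengths
    return [MODULES[key] for key in selected]
```

The same situation thus yields different step lists for different profiles, which is the mechanism by which tailoring reduces friction.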
6. AI role and constraints
If an AI component is used, it should be limited to language realization under strict controls:
- Single-step generation from a structured plan (headline + steps + if–then plan + safety note).
- Schema enforcement: outputs must fit a predefined structure; freeform counseling narratives are disallowed.
- Prohibited behaviors: diagnosis, crisis counseling, clinical instruction, prescriptive medical/legal/financial directives.
- Safety filters and red flags: detection of self-harm, violence, or acute distress should trigger safe redirects and recommendations to seek qualified help; the system should not attempt to “handle” crises.
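Schema enforcement of the kind described above can be sketched as a structural check on the generated output; the key names and length limits are illustrative assumptions.

```python
# Required structure: headline + steps + if-then plan + safety note.
REQUIRED_KEYS = {"headline", "steps", "if_then_plan", "safety_note"}

def enforce_schema(output: dict) -> bool:
    """Accept only outputs that fit the predefined guidance structure."""
    if set(output) != REQUIRED_KEYS:
        return False
    steps = output["steps"]
    if not isinstance(steps, list) or not 1 <= len(steps) <= 5:
        return False
    # Short, step-sized strings only; long text could read as a counseling narrative.
    return all(isinstance(step, str) and len(step) <= 140 for step in steps)
```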
7. Safety and clinical governance
For clinical reassurance, governance should include:
- scope disclaimers at the point of guidance delivery,
- content exclusions (no medical/legal/financial),
- high-risk content detection and safe escalation messaging,
- audit logs for system prompts and schema outputs (internal, privacy-preserving),
- monitoring for adverse events and user complaints.
8. Evaluation and continuous improvement
A scientifically defensible evaluation approach includes:
- user comprehension and adherence metrics,
- perceived helpfulness and selfefficacy measures,
- behavioral outcomes appropriate to coaching (e.g., completion of intended action),
- safety monitoring and complaint-driven review,
- analysis of personalization effects (does tailoring improve adherence versus generic guidance?).
Clinical efficacy claims should not be made without appropriately designed trials and ethical approvals where needed.
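The personalization-effect analysis above amounts, in its simplest form, to comparing adherence (completion of intended actions) between tailored and generic guidance; the counts below are placeholders, not real data, and a real analysis would add confidence intervals and controls.

```python
def adherence_rate(completed: int, delivered: int) -> float:
    """Fraction of delivered guidance items whose intended action was completed."""
    return completed / delivered if delivered else 0.0

tailored_rate = adherence_rate(64, 100)  # placeholder counts, not real data
generic_rate = adherence_rate(52, 100)   # placeholder counts, not real data
lift = tailored_rate - generic_rate      # raw difference in adherence
```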
9. Limitations
- Evidence from clinical interventions does not automatically transfer to non-clinical, brief, app-delivered coaching; effect sizes may shrink.
- Self-report inputs can be noisy and state-dependent.
- Trait-based personalization can be miscalibrated if the trait measurement is low-confidence.
- The system must avoid “therapy mimicry” that could be misinterpreted as clinical care.
References
- Hofmann, S. G., et al. (2012). “The Efficacy of Cognitive Behavioral Therapy: A Review of Meta-analyses.” https://pmc.ncbi.nlm.nih.gov/articles/PMC3584580/
- Cuijpers, P., et al. (2007). “Problem solving therapies for depression: a meta-analysis.” https://pubmed.ncbi.nlm.nih.gov/17194572/
- Bell, A. C., & D’Zurilla, T. J. (2009). “Problem-solving therapy for depression: a meta-analysis.” https://pubmed.ncbi.nlm.nih.gov/19299058/
- Zhang, A., et al. (2018). “The Effectiveness of Problem-Solving Therapy for Primary Care Patients’ Depressive and/or Anxiety Disorders: A Systematic Review and Meta-Analysis.” https://pubmed.ncbi.nlm.nih.gov/29330248/
- Gollwitzer, P. M., & Sheeran, P. (2006). “Implementation Intentions and Goal Achievement: A Meta-analysis of Effects and Processes.” https://www.sciencedirect.com/science/chapter/bookseries/abs/pii/S0065260106380021
- Wang, G., et al. (2021). “A Meta-Analysis of the Effects of Mental Contrasting With Implementation Intentions on Goal Attainment.” https://pmc.ncbi.nlm.nih.gov/articles/PMC8149892/
- World Health Organization (2021). “Ethics and governance of artificial intelligence for health: WHO guidance.” https://www.who.int/publications/i/item/9789240029200