Preview Environment
Instructor Overview - Free to Explore

A data-driven simulation for MIS and business information systems courses

Students work with a multi-layered dataset ecosystem, analyze and join data files in Excel, make evidence-backed resource-allocation decisions, justify their reasoning in writing, and see measured results after each turn. Instructors get a configurable teaching platform built around the data-driven decision-making cycle, with engagement analytics, reviewable AI-supported reasoning feedback, and a free sample section to explore before adopting.

Key terms:
Section = your course environment
Turn = one decision cycle
Team / Candidate = student group + their simulated campaign
Action Plan = two-part reasoning form submitted before each action
Sample Section = preconfigured exploration environment for instructors
Data Acquisition = purchasing additional intelligence datasets during the simulation

What MIS / BIS professors usually want to know first

Does this teach real analytical work, or just run a game?

That is the adoption question this page should answer quickly. MISsimulation is built to show faculty three things early: the data is rich, the decisions are evidence-driven, and the competitive, resource-constrained environment forces students to think strategically about ROI, allocation, and timing.

๐Ÿ—‚๏ธ Rich dataset ecosystem
17+ named datasets across foundational, financial, behavioral, social-network, and recurring intelligence layers, not a single prebuilt dashboard.
๐Ÿง  Competitive resource allocation
Students decide what data to buy, where limited budget will earn the best return, which segments are worth pursuing, and when a competitor's weakness is worth attacking.
๐Ÿ“‹ Assessment-ready instructor view
Reasoning scores, instructor review/override, participation signals, and CSV exports support grading, AoL reporting, and accreditation documentation.

Why a campaign simulation works for MIS and business information systems teaching

The election theme is the setting, not the learning objective. Students practice the analytical and managerial tasks MIS courses are built around, in a context that feels concrete and competitive rather than abstract. Critically, students must make information-acquisition decisions: what data to purchase, when to research rather than act, and how to combine multiple sources into an actionable model. Because resources are limited and competitors are acting at the same time, students cannot go after everything. They have to allocate scarce budget, choose the highest-yield targets, and adjust when the competitive landscape changes.

→ Working with structured CSV datasets and inspecting raw data
→ Spreadsheet analysis, filtering, and segmentation in Excel or Google Sheets
→ Cost/reach tradeoffs, ROI thinking, and scarce-resource allocation
→ Decision support under uncertainty - buying data, weighing options
→ Evidence-based reasoning: explaining decisions with data, not intuition
→ Iterative strategy: measuring results and improving decisions turn over turn
→ Information-acquisition decisions: choosing which datasets to purchase, when to invest a turn in research, and how to manage intelligence costs against budget
→ Joining multiple data sources: combining voter, financial, and behavioral files into a single decision model - the integration work analysts actually do
→ Reasoning from behavioral proxy data: working with spending patterns, utility behavior, and service-complaint signals, not only direct survey answers
→ Strategic targeting under pressure: choosing where not to spend, because limited budget makes prioritization part of the learning
→ Competitive response: reading rival movement, spotting weak areas, and countering where the data suggests the highest return

Why Instructors Choose MISsimulation

📊
Rich Dataset Ecosystem
Students work across 17+ named datasets, including behavioral proxy signals and recurring intelligence, closer to real analytics than a single spreadsheet exercise.
🤖
Reasoning Assessment You Control
The AI scores use of evidence, realism of predictions, and quality of reasoning. You review, annotate, or override feedback before students see it.
📈
Assessment & AoL Reporting
Export per-student grades, reasoning indicators, and participation data into your gradebook or AoL workflow. Useful for AACSB-style direct measures.
🎓
Flexible Course Control
Choose turn count, pacing, modules, datasets, language, and grading emphasis, then explore a free sample section before adopting.

For Colleagues and Faculty

Know a colleague or professor who teaches MIS, analytics, or a related course?

If this simulation looks like a fit for a colleague, you can send them a quick recommendation. It takes about 30 seconds and helps them discover the tool on their own terms.

1
Student Registers & Joins Course
~2 minutes

Students receive a course code from you (shared via LMS or syllabus). They go to missimulation.com, enter the code, pay the one-time $28.99 access fee, and land directly in their team dashboard. No software to install.

Reinforces: digital onboarding literacy, low-friction system access
✓ Works on any device
✓ Institutional email accepted
✓ Team assigned automatically
2
Download & Analyze the Data
~30–60 minutes outside class

At the start of each turn, students download one or more CSV datasets from their dashboard. These are uniquely generated for their island (Misland) - different every course run. Optional datasets are purchased with each team's in-simulation campaign budget, so students must decide which intelligence is worth the spend, when to invest a turn in primary research, and how to combine multiple sources into an actionable model.

Reinforces: data literacy, spreadsheet fluency, business intelligence thinking, data integration, information-acquisition decisions, segmentation
📦 Dataset Ecosystem: 5 layers, 17+ named files
🏛️ Foundational (always available)
Registered Voters List · Household Address List · Area Profiles
💰 Financial & Socioeconomic (purchasable with in-simulation budget)
House Mortgage List · Financial Well-Being Dataset · Vehicle Registry · Vocational Registry
🔍 Behavioral Proxy (purchasable with in-simulation budget)
Merchant Consumption Index · Utility Innovation Report · Citizen Service Logs
Spending patterns, utility behavior, and service-complaint signals act as revealed-preference proxies, closer to real analytics than survey answers alone.
📡 Social Network (when microblogging enabled)
Microblogging User Base · Follower Network · Post Archive · Candidate Followers
Students identify high-reach influencers and model diffusion vs. direct outreach.
🔄 Recurring & Public-Signal
Independent Survey · Lawn Election Signs · Residents' Contributions
Data that changes turn over turn - students learn to distinguish stable baselines from evolving intelligence.
🔍 What students do in Excel
  • Filter residents by income bracket and district
  • Calculate cost-per-reach across target segments
  • Cross-reference support data with demographics
  • Identify high-value donor segments for fundraising
  • Use VLOOKUP / XLOOKUP to join datasets across files
  • Build pivot tables to compare segments by geography
  • Model ROI: expected support lift vs. action cost
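For instructors who want to preview what that analysis involves, the join-and-segment workflow above can be sketched outside a spreadsheet as well. A minimal Python sketch with made-up sample rows and field names (`resident_id`, `district`, `household_income` are illustrative assumptions, not the platform's actual CSV schema):

```python
import csv
import io

# Hypothetical sample rows standing in for two downloaded CSV files;
# field names are illustrative, not the platform's actual schema.
voters_csv = """resident_id,district,age
1,East,44
2,East,37
3,West,52
4,East,61
"""
finance_csv = """resident_id,household_income
1,135000
2,98000
3,125000
4,150000
"""

voters = list(csv.DictReader(io.StringIO(voters_csv)))
income = {row["resident_id"]: int(row["household_income"])
          for row in csv.DictReader(io.StringIO(finance_csv))}

# The VLOOKUP/XLOOKUP step: join on resident_id, then filter the segment
# (East district, $120K+ household income).
segment = [v for v in voters
           if v["district"] == "East" and income[v["resident_id"]] >= 120_000]

action_cost = 500  # assumed in-simulation cost of one targeted outreach action
cost_per_reach = action_cost / len(segment)
print(len(segment), cost_per_reach)  # → 2 250.0
```

In Excel the same steps are a lookup column, a filter, and one division; the point is that students are doing a real join-filter-metric pipeline either way.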
💡 Students make information decisions, not just campaign decisions
  • Which datasets are worth spending part of the team's simulation budget on?
  • Which segment offers the best expected ROI under limited resources?
  • Is it smarter to research this turn or act before a rival does?
  • Where is a competitor vulnerable enough to justify a targeted response?
Datasets include static files available from Turn 1, recurring files that update across turns, and intelligence files generated by survey and fundraising actions.
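The buy-or-don't-buy question above can be framed as a quick cost-per-point comparison, which is a natural worked example for class discussion. A back-of-the-envelope sketch in Python; every number here is hypothetical and none of the values come from the platform:

```python
# All inputs are hypothetical, for illustration only.
dataset_price = 300    # in-simulation cost of an optional intelligence file
action_cost = 500      # cost of the campaign action the data would inform
lift_untargeted = 2.0  # expected support-point lift acting on free data only
lift_targeted = 3.5    # expected lift if the purchased data sharpens targeting

# Compare cost per support point with and without the purchase.
cpp_without = action_cost / lift_untargeted
cpp_with = (action_cost + dataset_price) / lift_targeted

worth_buying = cpp_with < cpp_without
print(round(cpp_without, 2), round(cpp_with, 2), worth_buying)  # → 250.0 228.57 True
```

The comparison, not the numbers, is the lesson: a team should justify the purchase only when the expected targeting gain outweighs the intelligence cost.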
3
Submit Decisions + Written Rationale
~15 minutes

Students return to the platform and queue their actions for the upcoming turn. Because every turn happens inside a competitive landscape, teams must decide where scarce resources go, which segment promises the best return, and whether to press an advantage or respond to a rival move. Each turn, a team can submit one offline action plus one optional online action if microblogging is enabled:

Reinforces: decision-making under constraints, competitive strategy, evidence-based reasoning, written justification
Offline Action (one per turn - choose one)
  • Campaign action - targeted outreach using uploaded address list or area selection, issue choice, optional negative/rival targeting where enabled
  • Survey action - gather fresh intelligence using a targeted resident list; returns issue preferences, support signals, and demographic breakdowns
  • Fundraising / contribution drive - raise campaign budget by targeting likely donors with financial capacity; uses uploaded donor list
Online Action (optional, when microblogging is enabled)
  • Regular post - message broadcast to existing followers in the social network
  • Promoted post - targeted to specific handles using an uploaded list; reaches beyond existing followers
Offline and online actions are separate slots - teams can do both in the same turn.
Action Plan (two-part, submitted with every action)
Strategic Approach (WHO + WHY)
Which segment to target, which dataset supports the choice, and why this action fits the current strategy. Up to 500 characters.
Expected Results (WHAT)
A concrete prediction: support lift, survey response rate, donation total, or reach/engagement depending on action type. Up to 500 characters.
Example (campaign): "Targeting East households earning $120K+ because financial well-being data shows healthcare alignment and income density 34% above island average. Expected: 3-4 point support lift in East District."
4
AI Evaluates Strategic Reasoning
Automatic - instructor reviews before release

After all teams submit, the platform evaluates each team's written rationale for analytical rigor - generating a score and qualitative feedback. The AI assesses reasoning quality, not just competitive outcome. You see every result before students do; you can annotate or override any feedback before releasing it. The AI supports your assessment - it does not replace your judgment.

Reinforces: structured analytical reasoning, evidence-backed justification, formative feedback literacy
Strong - 87/100

AI: District 3 concentration is well-supported - income density cited (34% above average) is accurate and aligns with your ROI objective. Avoiding District 7 based on engagement scores shows sound data use. Opportunity: cost-per-reach in rural segments is 2.4× urban average. A 15–20% budget shift there could improve coverage without sacrificing efficiency.

Weak - 41/100

AI: Rationale does not cite any dataset to support District 5 selection. "We thought it would have the most supporters" is not a data-driven justification. For Turn 4: open the demographic dataset, identify the metric that supports your target, and state it explicitly.

5
Students See Results & Standings
After turn deadline closes

When you release turn results, students review outputs for whatever action type they submitted - campaign results, survey findings, fundraising returns, or online engagement effects - alongside AI reasoning feedback and competitive standings. They can see which teams achieved the highest return on limited resources, which strategies drove lower cost and higher yield, and whether a response to competitor behavior actually paid off.

Reinforces: performance measurement, competitive benchmarking, ROI interpretation, results analysis
+847
Effectiveness score
$4.96
Cost per point
3,240
Residents reached
#2
Team standing
Survey actions return completed survey responses and demographic breakdowns. Fundraising actions return total raised, average gift, and net gain after action cost. Online actions return reach, reposts, and follower-growth effects when microblogging is enabled.
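The headline metrics above are simple ratios over raw turn outputs, and recomputing them makes a good debrief exercise. A sketch with illustrative values chosen to reproduce the sample card; the variable names are ours, not the platform's:

```python
# Illustrative turn outputs, chosen to reproduce the sample card above.
effectiveness = 847          # support-lift points earned this turn
action_spend = 4201.12       # simulation dollars spent on the action
residents_reached = 3240

cost_per_point = action_spend / effectiveness        # the "$4.96" card
cost_per_resident = action_spend / residents_reached
print(round(cost_per_point, 2), round(cost_per_resident, 2))  # → 4.96 1.3
```

Cost per point is the metric that makes "who spent smartest" comparable across teams, independent of who spent most.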
What You See as the Instructor
Real-time throughout the simulation

While the simulation runs, you have a real-time view of your section. This is what instructor-facing analytics looks like in practice:

📊 Section + Team Analytics
View team performance trends, action effectiveness patterns, competitive-response choices, and data-use indicators across the section. Surface outliers and strong analytical approaches at a glance.
🧑‍💻 Participation and Engagement Signals
Active minutes per student, inactive-student flags, participation balance inside teams, top-contributor and needs-attention indicators. Useful for managing free-rider risk and fairness concerns.
📥 Grade Export (CSV)
Export a per-student grade file at any point in the simulation. Import directly into your gradebook or use in grading, AoL, or accreditation reporting workflows.
Export a per-student grade file at any point in the simulation. Import directly into your gradebook or use in grading, AoL, or accreditation reporting workflows.

Steps 2–5 repeat for each turn (4–13 turns depending on your settings).

Each turn, students review their results, compare against competitors, and can refine their approach. The competitive standings give everyone a concrete reason to engage with the data.

Acquire data → Analyze & integrate → Form a hypothesis → Act → Measure results → Improve next turn
Built for assessment, not just engagement

For professors and program administrators, the platform is useful because the analytics can double as evidence of learning, not just a game scoreboard.

๐Ÿ“ Direct measure of analytical reasoning
AI-supported reasoning scores show whether students are citing data, making realistic predictions, and justifying their decisions clearly.
๐Ÿ“ค Exportable grading / AoL evidence
Per-student exports support gradebook import, AoL binders, and accreditation conversations without manual re-entry.
๐Ÿง‘โ€๐Ÿซ Instructor judgment stays central
You can review results before release, annotate feedback, and blend competitive performance, participation, and reasoning quality in your rubric.
Pedagogical Design
What Separates A Students from C Students

The simulation has a 4-tier analytical maturity ladder built into its mechanics. Students who reach Level 3 aren't just winning - they're demonstrating the analytical thinking employers actually hire for.

Level | Name | Behavior | Competitive Outcome
0 | Intuition-Based | No data use; mass-marketing approach | Poor targeting, low ROI
1 | Single-Variable | Uses one dataset for broad trends | Moderate effectiveness
2 | Multi-Variable | Cross-references multiple datasets | Good targeting efficiency
3 | "The Needle" | Triangulates 3–4 dataset layers into a micro-targeted hypothesis, joining financial, behavioral, and geographic signals | Maximum competitive advantage
The leaderboard makes analytical quality visible in real time. Teams that guess get outcompeted by teams that analyze. That competitive pressure is what makes data work feel necessary rather than theoretical.
Flexible Enough for Any Course Format

Configure turns, modules, and activation timing to match your course level and learning goals.

โฑ๏ธ Turn count + pacing
Run a 4-turn unit or a 13-turn semester experience.
๐Ÿงฉ Modules + datasets
Enable the foundational experience or add fundraising, surveys, social media, and advanced dataset tiers.
๐Ÿ“ Grading emphasis
Blend reasoning quality, participation, and competitive performance the way your course requires.
๐ŸŒ Language + section setup
Configure section language, timing, and activation rules to match your delivery format.
๐Ÿ“˜ Intro MIS
4โ€“6 turns, standard module set, practice mode available for first-time users. Emphasis on data literacy fundamentals and structured decision-making workflow.
๐Ÿ’ป Business Computing
Emphasizes the full CSV-to-decision workflow: filtering, VLOOKUP/XLOOKUP, pivot tables, cost-per-reach modeling. AI reasoning scores provide a grading foundation for spreadsheet analysis competency. Students see exactly how spreadsheet skill translates into competitive results.
๐Ÿ”ฌ Advanced MIS / Business Analytics
6โ€“13 turns with expanded dataset tiers (financial, behavioral proxy, social network), multi-source data integration, and advanced instructor analytics. AI reasoning scores and grade exports support assessment and documentation.
๐ŸŽ“ Capstone / Integrative
All dataset layers active, debrief materials for deeper learning integration, and exportable per-student data to support grading and AoL documentation.

Start with a free sample section

Instructor access is free. Explore the full platform before you decide how to configure it for your course.

1
Create your free instructor account
No credit card. Takes about 2 minutes.
2
Explore a preconfigured sample section
See the full student and instructor experience before you set anything up.
3
Configure your course section
Set number of turns, modules, pacing, and language. Use a preset or customize.
4
Share the section code with students
Post it in your LMS, syllabus, or email. Students join in under 2 minutes.
Supporting materials included
Assignment templates, grading rubrics, sample discussion prompts, a student quick-start guide, and onboarding help are available once your instructor account is active.

Common questions from instructors

What do students actually do each turn?

Each turn, students download one or more CSV datasets, analyze them in Excel or Google Sheets, decide how to allocate resources across target segments, upload a filtered list if using targeted outreach, and submit a written rationale explaining their choices. After the deadline closes, you release results and AI-supported feedback. Total weekly student time typically ranges from 15 minutes to over an hour, depending on depth of analysis and module settings.

Do students need to install special software?

No. The simulation runs in any web browser. Students do their data analysis in whatever spreadsheet tool they already have - Excel, Google Sheets, OpenOffice, Tableau, or similar. Nothing new to install or configure.

How do I grade it?

You can grade on any combination of competitive performance, participation, and reasoning quality. The platform provides per-student data exports and AI-supported reasoning scores that you can use as grading inputs alongside your own rubric. Instructor materials include sample grading templates to help you get started quickly.

What exactly does the AI evaluate?

The AI evaluates the written rationale each student submits - specifically whether the reasoning cites relevant data, supports the targeting choice, and demonstrates analytical thinking. It produces a score and qualitative explanation. You see this before students do and can annotate, adjust the score, or add your own comments before releasing feedback. The AI supports your assessment process; final grading authority stays with you.

How long does the simulation take to run?

The simulation is configurable for different formats. You can run a focused 4-turn unit over a few weeks or expand to 13 turns across a full semester. Pacing, turn frequency, and module depth are all adjustable. Many instructors run it as a 4–6 week segment within a longer course.

Why an election campaign instead of a business scenario?

The framing works because students already know what a campaign is - so the rules and stakes are immediately intuitive. That familiarity lets the course focus on the analytical skills rather than spending time explaining a business context. Instructors typically frame it as: "We are using a campaign competition as the vehicle because it creates the exact pressure - budget limits, incomplete data, competing teams, performance accountability - that makes data analysis feel necessary rather than theoretical."

How much spreadsheet skill do students need coming in?

Basic spreadsheet comfort is enough - students who can open a CSV, apply a filter, and do simple arithmetic will be functional from Turn 1. More advanced students can go deeper with pivot tables, segmentation calculations, or cost modeling. The simulation meets students at their current skill level and gives more advanced ones room to differentiate.

Can sections run in languages other than English?

Yes. The platform supports multiple languages and includes section-level language controls, so different sections of the same course can run in different languages. Contact us to confirm which languages are available for your specific use case.

Can a team take both an offline and an online action in the same turn?

Yes, when microblogging is enabled. The offline slot (campaign, survey, or fundraising) and the online slot (microblogging) are separate - teams can use both in the same turn. The exclusivity is inside the offline slot: a team can do a campaign action or a survey action or a fundraising action for that slot, but not more than one. The online slot is independent.

How do survey and fundraising actions differ from campaign actions?

All three use the offline action slot - so choosing to run a survey or a fundraising drive means not running a campaign action that turn, which creates a real strategic tradeoff. A survey action targets a resident sample and returns fresh issue-preference and demographic intelligence - useful before a big spend. A fundraising / contribution drive targets likely donors and returns dollars raised, average gift amount, and net gain after action cost. Students must plan the Expected Results differently for each: support-lift predictions for campaigns, response-rate and insight expectations for surveys, dollar-return expectations for fundraising.

What goes into the Action Plan?

The Action Plan has two required fields, each with a 500-character limit. Strategic Approach asks students to explain who they are targeting and why - which dataset they used, what the data shows, and why this action makes sense for their current position. Expected Results asks for a concrete prediction specific to the action type: expected support change for campaign actions, expected survey response rate or insight for survey actions, expected dollars raised or net gain for fundraising actions, expected reach or engagement for online actions. That structured specificity is what the AI-supported feedback evaluates.

What datasets are included?

The simulation includes 17 or more named datasets across five categories: foundational (Registered Voters, Household Address List, Area Profiles), financial and socioeconomic (House Mortgage List, Financial Well-Being Dataset, Vehicle Registry, Vocational Registry), behavioral proxy (Merchant Consumption Index, Utility Innovation Report, Citizen Service Logs), social network (Microblogging User Base, Follower Network, Post Archive), and recurring public-signal datasets (Independent Survey, Lawn Signs, Residents' Contributions). Some are available from Turn 1 at no cost; others must be purchased using the team's in-simulation campaign budget. That purchase decision is itself a learning objective: students must weigh whether additional intelligence is worth the cost before acting. This is one of the clearest ways the simulation teaches information-systems thinking - students are not just consumers of a pre-given dashboard. They decide what information is worth acquiring.

How much instructor time does it take each week?

After the section is configured, the weekly instructor workload is usually light: monitor submissions, review AI-supported reasoning feedback, and release results on your schedule. Many instructors use the exported grade data and built-in analytics to reduce manual grading time rather than add to it.

What help is available before I commit?

You can start with a free sample section, use the included assignment templates and grading rubrics, and request a walkthrough if you want help mapping the simulation to your syllabus. The goal is to let you evaluate the full instructor experience before you commit to a live section.

Ready to See It With Your Own Login?

A free instructor account gives you the full simulation experience - all screens, all data, all AI feedback - with no obligation.

Get Free Instructor Access