Expression-Based Piping

Question Piping: Make Your Surveys Feel Like Conversations, Not Forms

Your survey tool probably has question piping. You can insert someone's name into a greeting, or reference which product they selected a few questions back. That's useful, and it makes surveys feel slightly more personal than the generic "How satisfied are you with the product?" approach.

But here's where most tools stop. What if you need to calculate annual hours from weekly input ("That's 780 hours per year"), show conditional text based on tenure, or create validation rules that adapt to previous answers? That's where basic variable piping ends and expression-based piping begins.

The Three Levels of Piping

Not all piping systems are created equal. Understanding the progression from basic to advanced helps clarify what's possible and why most tools leave researchers wanting more.

Level 1, Variable Piping (everyone has this): Hi ${name}!
Level 2, Expression Piping (few tools offer this): ${e:hours * 52}
Level 3, Visual Builder (power plus usability): press Ctrl+I in any field to open the expression builder.

Level 1: Variable Piping (Everyone Has This)

This is the baseline. You reference a previous answer and insert it into later questions. The syntax usually looks something like ${firstName} or {product_name}, depending on the platform.

A respondent tells you their name is Sarah in Question 2. Question 5 renders as "Hi Sarah, how likely are you to recommend us to colleagues?" It's better than "Hi [NAME], how likely..." but it's essentially mail-merge for surveys. You're copying values from one place to another.

Most survey platforms offer this level of piping. It works fine for basic personalization: greetings, product names, selected options. The respondent sees that the survey remembers what they said. That's good. But it's limited.

You can't calculate with these values. You can't apply logic to them. You can't transform them in any meaningful way. If someone tells you they work 15 hours per week, you can show "15 hours" back to them, but you can't show "That's 780 hours per year" without either making them do the math or creating a hidden calculation question that clutters your survey structure.

Level 2: Expression Piping (The Power Layer)

This is where variable piping evolves into something more powerful. Instead of just inserting values, you perform operations on them. You write expressions (mathematical calculations, conditional logic, string manipulations) and the survey evaluates those expressions dynamically.

In Lensym's system, this looks like ${e:{weeklyHours} * 52}. The e: signals an expression. The {weeklyHours} references a previous question. The * 52 is the calculation. When a respondent answers "15 hours per week," the expression evaluates to 780, and they see "That's 780 hours per year" rendered in real-time.
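
If you're curious what that resolution involves, the core idea is simple: substitute the referenced answers into the expression, then evaluate what's left. The sketch below illustrates that substitute-and-evaluate pattern in TypeScript; the helper, the regexes, and the use of the Function constructor are our assumptions for illustration, not Lensym's actual engine.

```typescript
// Toy sketch of substitute-then-evaluate; not Lensym's actual engine.
type Answers = Record<string, string | number>;

function resolveTemplate(template: string, answers: Answers): string {
  // Find each ${e:...} expression, allowing one level of {questionId} braces inside.
  return template.replace(/\$\{e:((?:[^{}]|\{[^}]*\})*)\}/g, (_match, expr: string) => {
    // Substitute each {questionId} token with the respondent's answer.
    const withValues = expr.replace(/\{(\w+)\}/g, (_m, id: string) =>
      JSON.stringify(answers[id] ?? null)
    );
    // Evaluate the arithmetic or conditional expression that remains.
    return String(new Function(`return (${withValues});`)());
  });
}

// A respondent who answered 15 sees "That's 780 hours per year."
resolveTemplate("That's ${e:{weeklyHours} * 52} hours per year.", { weeklyHours: 15 });
```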

This level unlocks capabilities that basic piping simply can't offer. You can calculate percentages, compare values, combine multiple variables with conditional logic, create dynamic thresholds for validation, and adapt question wording based on combinations of previous answers. The survey doesn't just remember; it thinks.

Here's the problem: very few tools offer expression piping at all. Some platforms have it buried in complex features with syntax that most users never discover. Others require you to write JavaScript in custom code blocks, which means only technical users can access the power. The capability exists, but it's not usable for most researchers.

Level 3: Visual Expression Builder (The Usability Layer)

This is where power becomes practical. A visual builder removes the need to memorize syntax, question IDs, or expression grammar. You type ${ in any text field, a contextual menu appears, and you build expressions through a guided interface.

Lensym's visual builder shows you all previous questions (you can't pipe forward because causality matters). You see question titles, data types, and hierarchical numbering. Click a question to insert a simple variable, or click "Expression" to open a nested editor where you can build calculations, add conditional logic, reference multiple variables, and preview how the expression will evaluate.

The builder validates in real-time. If you write an expression that references a non-existent question, you see an error before you publish. If your conditional syntax is malformed, the expression editor highlights the problem. If you're trying to do math on text data (which won't work unless it's a numeric string), the type checker catches it.

This layer is what makes expression piping accessible to researchers who don't code. The capability has always existed in theory (JavaScript can do anything), but wrapping it in a visual interface that understands survey structure, validates expressions, and provides contextual help transforms it from "technically possible" to "actually usable."

What Expression Piping Actually Enables

The gap between "I can insert a variable" and "I can build dynamic, intelligent surveys" is filled by six core capabilities. Each one solves a specific problem that basic piping can't touch.

Calculations and Numeric Operations

Basic piping can show you that someone entered "15" as their answer. Expression piping can show "That's 780 hours per year, nearly 20 full work weeks annually."

The difference matters more than you might think. When you're asking about time investment, budget allocation, or resource distribution, raw numbers often lack context. A respondent tells you they spend €10,000 monthly on research tools. Is that a lot? Depends entirely on their total budget. If you can immediately show "That's 15% of your reported annual budget," you've given them context to evaluate their own answer.

This happens constantly in surveys. You ask about weekly hours worked, monthly expenses, annual revenue, team size, project counts. Individually, these numbers are data points. But when you calculate relationships between them (hours per project, cost per team member, revenue per customer), you unlock follow-up questions that would be impossible without doing the math first.

Consider a budget validation question. Someone reports a €50,000 annual research budget. Later, you ask them to estimate spending across categories: €15,000 on tools, €25,000 on participant compensation, €12,000 on software subscriptions. Those numbers don't add up to €50,000. Without expression piping, they submit the survey, you discover the inconsistency during data cleaning, and you either have bad data or need to re-contact respondents.

With expression piping, you can show them the calculation in real-time: "Based on your estimates, you've allocated €52,000 (104% of your stated budget). Would you like to adjust these figures?" They catch their own error, correct it on the spot, and your data stays clean. You're not just personalizing the survey; you're improving data quality during collection, not cleanup.
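
The arithmetic behind that prompt is nothing exotic, and it's worth seeing how little it takes. Here's a quick sketch using the numbers from the example above (the variable names are made up for illustration):

```typescript
// Illustrative only: the reconciliation behind "€52,000 (104% of your stated budget)".
const statedBudget = 50_000;
const estimates = { tools: 15_000, compensation: 25_000, subscriptions: 12_000 };

const allocated = Object.values(estimates).reduce((sum, v) => sum + v, 0); // 52000
const share = Math.round((allocated / statedBudget) * 100);                // 104

const prompt =
  `Based on your estimates, you've allocated €${allocated.toLocaleString()} ` +
  `(${share}% of your stated budget). Would you like to adjust these figures?`;
```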

Conditional Logic and Dynamic Text

Sometimes you need your question text itself to adapt based on previous answers. Not routing to different questions (that's branching logic), but changing the wording within a single question based on context.

An employee engagement survey needs to ask about professional development opportunities. But "professional development" means something different to a new hire versus a tenured employee. The new hire needs onboarding and initial skill-building. The tenured employee needs advanced training and leadership opportunities. You could create two separate questions with branching logic, but then you've doubled your question count and made your survey structure more complex.

Or you could write: "How satisfied are you with the ${e:{tenure} < 1 ? 'onboarding and initial training' : 'advanced professional development opportunities'} available to you?" One question. Different text depending on tenure. The respondent sees language tailored to their situation.

This gets more powerful when you combine multiple conditionals. A customer satisfaction survey for a SaaS product might need to adapt based on both plan type and usage duration. Free users who just signed up see different language than enterprise customers in their second year. You write: "As ${e:{planType} === 'Enterprise' ? 'an Enterprise customer' : 'a ' + {planType} + ' user'} who has been with us for ${v:months} months, how would you rate..."

The survey feels like it was written specifically for each respondent, because in a sense, it was. The conditional logic generates personalized text on the fly. Respondents don't see template syntax or generic placeholders. They see fluent, natural language that acknowledges their specific context.
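
If the ternary syntax looks intimidating, it helps to see it as a plain conditional. The snippet below mirrors the plan-type example as an ordinary function; the field name is hypothetical and the point is purely intuition:

```typescript
// Mirrors ${e:{planType} === 'Enterprise' ? 'an Enterprise customer' : 'a ' + {planType} + ' user'}.
// The field name is hypothetical; the logic is just a conditional.
function customerPhrase(planType: string): string {
  return planType === 'Enterprise' ? 'an Enterprise customer' : 'a ' + planType + ' user';
}

customerPhrase('Enterprise'); // "an Enterprise customer"
customerPhrase('Free');       // "a Free user"
```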

Multi-Variable References and Rich Context

Basic piping typically handles one variable at a time. Expression piping lets you combine multiple previous answers into a single, richly contextualized question.

Imagine a research study collecting demographics. You ask about role (researcher, practitioner, student), field (healthcare, education, technology), and years of experience. Later in the survey, you want to ask domain-specific questions. Instead of generic prompts, you write: "As a ${v:role} in ${v:field} with ${v:years} years of experience, what challenges do you face when conducting user research?"

A respondent who indicated "researcher," "healthcare," and "8 years" sees: "As a researcher in healthcare with 8 years of experience, what challenges do you face when conducting user research?" Every part of that sentence is personalized. The question acknowledges exactly who they are and what they do. It's not "Tell us about your challenges"; it's "Given your specific professional context, tell us what you experience."

This level of personalization dramatically increases the quality of open-ended responses. When a question demonstrates that you understand the respondent's situation, they're more likely to provide detailed, thoughtful answers. Generic prompts get generic responses ("It's fine," "No major issues," "N/A"). Contextual prompts get specific insights because the respondent sees that you're asking about their actual experience, not a hypothetical scenario.

Dynamic Validation Rules

Most surveys have static validation: "Must be a number between 0 and 100," "Must be a valid email," "Required field." Expression piping makes validation contextual and relational.

You ask someone their total annual budget in Question 3. Later, in Question 8, you ask about their tools budget. The validation rule is: "Must be less than or equal to ${e:{totalBudget}}." If they try to enter a tools budget that exceeds their total budget, they get an error message that references their specific total: "Tools budget cannot exceed your total budget of $${v:totalBudget}."

This works for all kinds of relational validation. End date must be after start date. Maximum value must be greater than minimum value. Current year expenses can't exceed the annual budget they reported earlier. Part-time hours can't exceed their reported weekly work hours. The validation rules reference previous answers dynamically, catching logical inconsistencies before the data leaves the survey.

This is particularly valuable in complex surveys with interdependent questions. A grant application asks about total project cost, then breaks it down by category (personnel, equipment, travel). You can set validation on each category: must not exceed the total. Or you can create a final validation question: "Your category totals sum to ${e:{personnel} + {equipment} + {travel}}. This ${e:({personnel} + {equipment} + {travel}) === {totalCost} ? 'matches' : 'does not match'} your stated total of $${v:totalCost}."
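
That final cross-check boils down to a sum and a comparison. Here's the same logic as a standalone sketch, with illustrative category names and figures:

```typescript
// Sketch of the final cross-check, with illustrative category names and figures.
function budgetSummary(
  personnel: number,
  equipment: number,
  travel: number,
  totalCost: number
): string {
  const sum = personnel + equipment + travel;
  const verdict = sum === totalCost ? 'matches' : 'does not match';
  return `Your category totals sum to $${sum.toLocaleString()}. ` +
         `This ${verdict} your stated total of $${totalCost.toLocaleString()}.`;
}

budgetSummary(60_000, 25_000, 10_000, 100_000);
// "Your category totals sum to $95,000. This does not match your stated total of $100,000."
```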

Dynamic Answer Options

Expression piping doesn't just work in question text. It works in answer choices, too. This creates options that adapt to previous responses.

A budget allocation survey asks about total budget, then presents options for how to distribute it. Instead of generic percentages, the options show calculated amounts: "Increase tools budget to $${e:{totalBudget} * 0.20} (20% of total)," "Increase personnel to $${e:{totalBudget} * 0.65} (65% of total)," "Maintain current distribution."

The respondent sees actual dollar amounts calculated from their stated budget. They're not doing mental math to figure out what 20% means in their context. The survey does it for them, making the options clearer and the decision easier.
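
Concretely, each option label is just an expression evaluated against the respondent's stated total. A sketch with made-up values:

```typescript
// Illustrative: option labels computed from a respondent's stated total budget.
const totalBudget = 80_000;

const options = [
  `Increase tools budget to $${(totalBudget * 0.20).toLocaleString()} (20% of total)`,
  `Increase personnel to $${(totalBudget * 0.65).toLocaleString()} (65% of total)`,
  'Maintain current distribution',
];
// ["Increase tools budget to $16,000 (20% of total)", "Increase personnel to $52,000 (65% of total)", ...]
```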

This also works for rating scales with dynamic labels. You ask someone how many hours they work weekly. Later, you present a satisfaction scale where the labels adapt: "1 - Too many hours (you work ${v:weeklyHours}, which is above average)" and "5 - Ideal workload." The scale interpretation is personalized to their specific situation.

Longitudinal Comparisons and Change Tracking

For studies that survey the same respondents over time, expression piping enables powerful comparison questions that would be nearly impossible otherwise.

Wave 1 asks about weekly exercise hours. Six months later, Wave 2 re-asks the same question, but now you can show context: "In our previous survey, you reported exercising ${v:wave1_exercise} hours per week. How many hours per week are you exercising now?"

The respondent sees their baseline answer, which triggers specific recall. They're not estimating from vague memory ("I think I said 8 hours?"). They see exactly what they reported and can accurately assess whether their behavior has changed.

After they answer, you can show the calculated change: "That's ${e:{wave2_exercise} > {wave1_exercise} ? 'an increase' : 'a decrease'} of ${e:abs({wave2_exercise} - {wave1_exercise})} hours per week, representing a ${e:abs(({wave2_exercise} - {wave1_exercise}) / {wave1_exercise} * 100)}% change from your baseline. What motivated this change?"

This follow-up question only makes sense because the survey calculated the change and determined whether it was positive or negative. The respondent sees their specific trajectory, not a generic "Did your exercise habits change?" but a precise, quantified statement about their actual behavior over time.
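
The change calculation itself is simple arithmetic. Here's the same logic as a plain function, with example values assumed for illustration:

```typescript
// Sketch of the Wave 1 vs Wave 2 follow-up text; example values assumed.
function changeSummary(wave1Hours: number, wave2Hours: number): string {
  const direction = wave2Hours > wave1Hours ? 'an increase' : 'a decrease';
  const delta = Math.abs(wave2Hours - wave1Hours);
  const percent = Math.round(Math.abs((wave2Hours - wave1Hours) / wave1Hours) * 100);
  return `That's ${direction} of ${delta} hours per week, ` +
         `representing a ${percent}% change from your baseline.`;
}

changeSummary(8, 6);
// "That's a decrease of 2 hours per week, representing a 25% change from your baseline."
```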

The Visual Expression Builder

Having powerful expression capabilities is one thing. Making them accessible to people who don't write code is another entirely. This is where Lensym's visual builder makes the difference between a feature that technically exists and a feature that researchers actually use. The builder integrates seamlessly with our visual graph editor, giving you both structural and content-level control.

The Problem with Power

Even in tools that support expression piping, the experience usually looks like this: You open documentation, find a reference page about "piped text syntax," discover you need to memorize question IDs (q_5c3a8f, q_2h9k1p), learn proprietary expression syntax that's sort of like Excel formulas but not quite, type everything manually into a text field, preview the survey to test if it works, discover a typo, go back and fix it, preview again. After fifteen minutes, you've successfully created ${pipe:q_5c3a8f * 52} and you're not entirely sure it's right until respondents actually take the survey.

This friction means most researchers don't bother. They stick with basic variable insertion because anything more complex requires too much cognitive overhead. The power exists in theory, but it's not accessible in practice.

How Lensym's Builder Works

You're editing a question title. You want to reference a previous answer. You type ${ and a contextual menu immediately appears, showing all previous questions in your survey. Not question IDs; actual question titles. Not in random order; in hierarchical sequence (Q1, Q2, Q3). Each one shows its question type badge so you know what kind of data you're working with (text, number, date, choice).

Click a question and you're presented with two options: insert as a simple variable, or build an expression. Choose "variable" and ${v:questionId} appears in your text. Choose "expression" and a nested editor opens.

The expression editor isn't a blank text box. It's a structured environment where you can insert variables (those {questionId} tokens), type operators (+, -, *, /, comparison operators, ternary conditionals), and see live validation as you build. The editor shows you the data type of each variable you reference. It validates your syntax in real-time, highlighting errors immediately. If you reference a question that doesn't exist, you see an error before you click save. If your ternary conditional is missing a colon, the editor tells you exactly what's wrong.

You don't need to remember question IDs because the visual browser shows you question titles. You don't need to memorize syntax because the editor guides you through it. You don't need to test in preview mode to know if your expression works because validation happens inline. When you close the expression editor, you see a badge showing the return type (number, string, boolean) so you know what the expression will produce.
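
To make the validation idea concrete: the checks amount to verifying that every referenced question exists and that its data type fits the operation. The toy example below illustrates that kind of check; the schema shape and the heuristics are assumptions for this sketch, not Lensym's actual validator.

```typescript
// Toy illustration of the kind of inline checks an expression editor might run.
// The schema shape and the heuristics below are assumptions for this sketch.
type QuestionType = 'number' | 'text' | 'date' | 'choice';
type SurveySchema = Record<string, QuestionType>;

function checkExpression(expr: string, schema: SurveySchema): string[] {
  const problems: string[] = [];
  for (const [, id] of expr.matchAll(/\{(\w+)\}/g)) {
    if (!(id in schema)) {
      problems.push(`References a question that doesn't exist: ${id}`);
    } else if (schema[id] !== 'number' && /[+\-*\/]/.test(expr)) {
      problems.push(`"${id}" holds ${schema[id]} data; arithmetic may not apply`);
    }
  }
  return problems;
}

checkExpression('{weeklyHours} * 52', { weeklyHours: 'number' }); // []
checkExpression('{firstName} * 52', { firstName: 'text' });       // one warning about text data
```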

This visual approach transforms expression piping from "technical feature for power users" to "accessible capability for any researcher who understands survey logic." You don't need to code. You need to understand what you want the survey to do ("show annual hours calculated from weekly hours") and the visual builder gives you the tools to express that intent without learning a programming language.

Where Piping Works (Everywhere)

One of the frustrating limitations of piping in most survey tools is that it only works in specific places. You can pipe into question titles, maybe descriptions if you're lucky, but that's it. Want to pipe into answer options? Not supported. Validation messages? Nope. Conditional logic? You'll need a workaround.

Lensym takes a different approach: if a field accepts text or numbers, it supports piping. This comprehensive coverage means you can build truly dynamic surveys without constantly hitting limitations.

Question titles and descriptions are the obvious use cases. This is where most of your personalization lives: adapting question text to reference previous answers, showing calculated values, using conditional language. Every question type supports piping in its title and description fields.

Answer options support piping across multiple choice, checkbox, dropdown, and ranking questions. This lets you create options that reference previous selections ("Increase ${v:selectedFeature} capabilities") or show calculated values ("Allocate €${e:{totalBudget} * 0.20} to tools"). The options adapt to each respondent.

Validation rules can reference previous answers to create relational validation. "Must be less than ${e:{totalBudget}}" ensures later questions don't exceed earlier constraints. This catches logical inconsistencies during data collection, not analysis.

Validation error messages can be personalized: "Tools budget cannot exceed your total budget of $${v:totalBudget}." The error message shows the specific value they entered earlier, making it immediately clear why validation failed.

Minimum and maximum values for numeric inputs, sliders, and rating scales can be dynamic. If Question 3 asks for total budget, Question 8's maximum value can be ${e:{totalBudget}}, preventing respondents from entering invalid amounts.

Scale labels (the text shown at the endpoints of sliders and rating scales) support piping, letting you create context-aware scales. "1 - Much less than your ${v:weeklyHours} hours/week" and "10 - Much more than ${v:weeklyHours}" personalize the rating interpretation.

Conditional logic expressions can use piping not just to reference values, but to calculate whether conditions are met. "Show Question 10 IF ${e:{monthlyExpenses} * 12 > {annualBudget}}" creates branching based on calculated comparisons, not just raw answer values.

This works across all 20+ question types Lensym supports. NPS, Likert, matrices, rankings, date inputs, country pickers, email fields; every type can contain piped variables and expressions, and every type's answers can be piped into later questions. No exceptions, no "this type doesn't support piping" limitations.
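
To ground the conditional-logic case from above: the show/hide decision reduces to a boolean comparison over piped values. A sketch with the field names from that example (the helper itself is hypothetical):

```typescript
// Hypothetical helper mirroring "Show Question 10 IF ${e:{monthlyExpenses} * 12 > {annualBudget}}".
function shouldShowQuestion10(monthlyExpenses: number, annualBudget: number): boolean {
  return monthlyExpenses * 12 > annualBudget;
}

shouldShowQuestion10(5_000, 50_000); // true: annualized spend (60,000) exceeds the stated budget
```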

The Impact on Data Quality

Piping isn't just about respondent experience or making surveys feel more personal. It has measurable effects on data quality that matter for research outcomes.

Higher Completion Rates

When surveys feel personal and relevant, respondents are more likely to finish them. Survey fatigue doesn't come from length alone. It comes from surveys that feel like they're wasting your time, asking repetitive questions, or treating you like a data source rather than a person with specific experiences.

When each question acknowledges what you said previously and builds on it, the survey feels like a conversation with forward momentum. You're less likely to abandon something that's clearly paying attention to your answers. Generic questions get generic effort. Personalized questions get engaged respondents who feel their time is being respected.

More Thoughtful Open-Ended Responses

Text responses in surveys with piping are consistently longer and more detailed than those in generic surveys. The mechanism is straightforward: specificity in questions triggers specificity in answers.

A generic prompt like "What could be improved?" often yields brief, vague responses: "Nothing," "It's fine," "I don't know." When you're asked about "the product" generically, your mind searches for something to say but lacks a clear anchor point.

The same question with piping ("What could be improved about ${v:productName}?") triggers specific recall. When you're asked about "Product B" specifically (which you just told the survey you use), you recall your actual experience with Product B and respond accordingly. Respondents provide concrete examples, detailed explanations, and actionable feedback.

This is the difference between data you can't act on ("Nothing really") and data that drives product decisions ("The Product B export feature times out with large datasets, and the error message doesn't explain why; it just says 'export failed'").

Fewer Mid-Survey Dropouts

Most survey abandonment doesn't happen at the beginning or end; it happens in the middle, usually around the point where respondents start to feel fatigued or annoyed.

The psychology is straightforward. When you answer Question 5 with specific information, and Question 7 asks you about "the product" without acknowledging what you just told them in Question 5, you think "Are they even reading my answers?" That doubt breaks trust, and once respondents don't trust that their effort matters, they have no reason to continue.

Personalized surveys that reference previous answers maintain that sense of engagement. The survey demonstrates it's listening, building on your responses, and treating your input as meaningful rather than interchangeable. This consistency keeps respondents engaged through to the end.

Self-Validation Catches Errors Early

One of the most practical benefits of expression piping shows up in surveys with interdependent numeric questions: budgets, time allocation, resource distribution, anything where multiple answers need to add up to a constrained total.

Without piping, a respondent estimates how they allocate their time across five activities, submits the survey, and you discover during analysis that their estimates total 110% or 65% of their workweek. You can't re-contact them to clarify. You either treat the data as suspect or try to normalize it somehow, neither of which is ideal.

With piping, you show them the calculation: "Based on your estimates, you've allocated ${e:{a1} + {a2} + {a3} + {a4} + {a5}} hours per week, which is ${e:(({a1}+{a2}+{a3}+{a4}+{a5})/{totalHours})*100}% of your ${v:totalHours}-hour work week. Does this seem accurate?"

They see the discrepancy immediately. If the math doesn't match their mental model, they go back and adjust. Your data stays clean because validation happens during collection, and the respondent corrects their own errors rather than you trying to fix them post hoc.

This pattern works for any scenario where you can calculate something from previous answers and show it back for confirmation. Monthly budgets summing to annual budgets. Component costs summing to total project costs. Daily time allocations summing to 24 hours. The survey does the math, shows the result, and lets respondents self-correct before submission.

Common Questions About Expression Piping

Can I pipe answers from previous surveys (Wave 1 to Wave 2 in longitudinal studies)?

Not yet within the piping system itself, though it's on our roadmap. Currently, piping works within a single survey session. For longitudinal studies, the workaround is to use pre-fill parameters in the survey link: pass Wave 1 data into Wave 2 as hidden fields, then pipe those hidden fields into your questions. It's an extra step, but it works until we build native cross-survey piping into the panel management features.

What happens if someone skips a question that's piped into later questions?

The piped variable renders as blank, or you can specify a fallback value. The syntax ${v:question3 || 'your previous answer'} means "show the value from question3, or if it's empty, show 'your previous answer' instead." This prevents awkward blank spaces in your question text when someone skips an optional question.
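
For intuition, the fallback behaves like a standard short-circuiting OR: an empty or missing answer falls through to the text you supply. A quick sketch, assuming JavaScript-style truthiness (which matches the syntax shown):

```typescript
// Mirrors ${v:question3 || 'your previous answer'}: a skipped answer falls back to the default text.
function withFallback(answer: string | undefined, fallback: string): string {
  return answer || fallback; // '' and undefined both fall through to the fallback
}

withFallback('Product B', 'your previous answer'); // "Product B"
withFallback(undefined, 'your previous answer');   // "your previous answer"
```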

Can I use piping in validation rules?

Yes, and this is one of the most powerful applications. You can set a validation rule like "Must be less than or equal to ${e:{totalBudget}}" so that later questions validate against earlier answers dynamically. The validation error message can also use piping: "Value cannot exceed your stated budget of $${v:totalBudget}." This makes validation contextual rather than static.

Does piping work in all languages?

The piping syntax itself (${v:...} and ${e:...}) works universally. However, conditional expressions that generate language-specific text need to be written for each language. If you're building a multi-language survey and want ${e:{role} === 'Student' ? 'your studies' : 'your research'} to work in German, you'd write ${e:{role} === 'Student' ? 'dein Studium' : 'deine Forschung'} for the German version. The logic is the same; only the text strings differ.

Is there a performance cost to using lots of piping?

No. Piping evaluates client-side in real-time as respondents answer questions. There's no server round-trip or API call. Even surveys with 50+ piped expressions render instantly because the evaluation happens in the browser using native JavaScript. You're not taxing any backend systems; the respondent's device does the work.

Can I pipe into thank you pages or email notifications?

Not currently, but it's planned. Right now, piping works within the survey itself (questions, descriptions, validation, logic). Extending to post-survey communications like customized thank you messages based on how someone answered, or follow-up emails that reference their specific responses, is on the roadmap for future releases.

What question types can I pipe FROM and TO?

All of them, in both directions. Every question type produces a value that can be piped: text strings, numeric values, selected options, dates, arrays of selections. And every question type can contain piped variables in its title, description, answer options (where applicable), and validation. There are no limitations based on question type.

Does piping work in preview mode?

Yes, and this is important for testing. When you preview your survey, piping evaluates based on the answers you enter during the preview. This lets you test personalization logic before publishing. You can verify that expressions calculate correctly, conditional text renders as expected, and validation rules work with piped values.