In 2006, Google ran an experiment on their search results page. They tested increasing the number of results shown per page from 10 to 30. The hypothesis was reasonable: more results per page means fewer clicks to find what you want, which should improve user satisfaction.
The experiment failed. Users who saw 30 results were less satisfied, not more. The cause was traced to a half-second increase in page load time. Despite objectively receiving more results, users experienced the interaction as worse — and used the product less frequently — because of the additional latency.
A half-second delay. The degradation in user experience was entirely about the gap between expected and actual performance, not about the content of the experience itself.
This is one dimension of a broader phenomenon: user perception of digital products is shaped as much by expectation, framing, and design psychology as by objective performance. The medical concepts of the placebo effect (positive expectation improving outcomes) and the nocebo effect (negative expectation worsening outcomes) apply with surprising directness to software and digital product design.
Understanding how these effects operate is not an abstract philosophical exercise. For product designers, developers, and business owners, the gap between perceived and actual performance represents a direct lever on user satisfaction, retention rates, and ultimately revenue. Research consistently shows that users who perceive a product as fast and reliable engage with it more frequently, recommend it more often, and churn at lower rates — even when the "fast and reliable" assessment is partly or wholly a function of expectation management rather than objective system performance.
The History of Perception Effects in Computing
The study of human perception in computing predates the modern web by decades. Fitts' Law (1954) quantified the relationship between target size, distance, and pointing time, establishing that interface design has measurable effects on human performance. The first systematic studies of user response to computer delays were conducted at IBM in the 1960s, finding that users' frustration increased non-linearly with latency rather than proportionally.
By the 1980s, human-computer interaction had become a formal research discipline. Jakob Nielsen's research in the early 1990s, first at Bellcore and later at Sun Microsystems, established the latency thresholds that remain influential today. His 1994 book Usability Engineering articulated a framework for thinking about response time as a primary quality dimension, arguing that "for most systems, response time is the most important factor determining user satisfaction."
The twist — which emerged from subsequent research — was that users' subjective assessment of response time was only loosely correlated with measured response time. The same objectively measured latency produced dramatically different user satisfaction scores depending on how that latency was framed, contextualized, and presented. This was the first rigorous evidence of a placebo-like effect in software: perception was malleable independent of objective reality.
The Placebo Effect: When Expectation Shapes Experience
The medical placebo effect is well-established: patients who receive inert treatments report and sometimes physiologically demonstrate improvements when they believe they are receiving active treatment. Meta-analyses in pain research find effect sizes of 0.2-0.5 for placebo analgesia — meaningful, real improvement from pure expectation.
In digital products, the mechanism is analogous. Users who expect a product to be fast, reliable, and high-quality experience it more favorably than users who expect the opposite, even when the objective product they interact with is identical.
This is not self-deception or irrationality. It reflects several real cognitive processes:
Confirmatory attention: When you expect speed, you notice moments of speed more than moments of slowness. When you expect slowness, the reverse. Selective attention shapes the memory of an interaction as much as the objective events.
Expectation-consistent interpretation: Ambiguous interface states — a momentary pause, an unusual error message — are interpreted through the lens of prior expectations. A user who trusts a product explains the pause as "probably processing something complex"; a user who distrusts it explains the same pause as "probably broken."
Satisfaction calibration: Satisfaction is always relative to expectation. An identical experience feels excellent when it exceeds expectations and poor when it falls short. Interface design that sets accurate, appropriately low expectations — without underselling the product — can improve user satisfaction for any given level of objective performance.
Peak-end rule: Nobel Prize-winning psychologist Daniel Kahneman's research on remembered experience found that people judge an experience primarily by how it felt at its most intense moment (the peak) and at its conclusion (the end), not by its average. This has direct implications for interface design: a moderately slow interaction that ends quickly and smoothly will be remembered more favorably than a slightly faster interaction that ends with a stall or error. Managing the peak and end of each interaction is often more impactful than improving average performance.
Quantifying Expectation Effects
Research by Microsoft's UX Research team in 2010 found that users who were told they were using "the world's fastest search engine" rated the same search results as more relevant than users given no framing. The effect was statistically significant and persisted across multiple search tasks. The content was identical; the framing changed the assessment.
A 2014 study by researchers at Carleton University found that users' satisfaction with identical website loading times varied by up to 40 percent depending on the visual design quality of the loading screen displayed while the page loaded. Sites with professional-looking loading animations were rated as faster — and as delivering higher-quality results — even when load times were identical to sites with basic spinners.
Perceived Speed vs Actual Speed
The Google 30-results experiment is one of several demonstrations that user perception of speed diverges meaningfully from measured speed.
Research by Nielsen Norman Group and others has established rough thresholds for how users experience latency:
| Latency | User Experience | Design Response |
|---|---|---|
| Under 100ms | Feels instantaneous | No feedback needed |
| 100ms-300ms | Feels immediate but not instantaneous | No feedback needed |
| 300ms-1000ms | Noticeable delay; user's flow slightly interrupted | Optional activity indicator |
| 1-3 seconds | Attention wanders; visible feedback needed | Progress indicator required |
| 3-10 seconds | Significant frustration; abandonment risk rises | Detailed progress feedback |
| Over 10 seconds | High abandonment; recovery difficult | Percentage-based progress with context |
These thresholds are averages and context-dependent. Critically, they interact strongly with expectation: for an action the user expects to be instant (a tap on a button), 500ms feels slow. For an action the user expects to take time (generating a complex report), 5 seconds feels acceptable.
This expectation-dependency means that managing what users expect an action to take is as important as the actual latency — sometimes more so.
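One way to make the thresholds in the table above operational is to encode them as a small feedback policy. The sketch below is illustrative, not a standard API: the function name, the returned labels, and the exact cutoffs (taken from the table) are assumptions a real product would tune to its own context.

```python
def feedback_for_latency(expected_ms: float) -> str:
    """Suggest a feedback style for an expected wait, per the thresholds above."""
    if expected_ms < 300:
        return "none"                    # feels instant or immediate
    if expected_ms < 1000:
        return "optional-indicator"      # noticeable; a subtle activity cue helps
    if expected_ms < 3000:
        return "progress-indicator"      # attention wanders; show visible progress
    if expected_ms < 10000:
        return "detailed-progress"       # frustration zone; show detailed progress
    return "percentage-with-context"     # long waits: percent done plus explanation
```

In practice the input would be an estimate of the operation's typical duration, and, per the expectation-dependency point above, the cutoffs themselves should shift depending on whether the user expects the action to be instant or slow.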
The Ambient Latency Effect
Research from Microsoft's Bing team and others has found that even small amounts of added latency compound across a session. Users exposed to 250ms of added latency across a session reduce engagement measurably — not necessarily because any single interaction felt unacceptably slow, but because the accumulation of small delays degrades the subjective quality of the experience.
A landmark Bing experiment presented in 2009 found that an additional 500ms of delay reduced revenue per user by 1.2 percent, and a parallel Google experiment found that the damage persisted for weeks after the latency was removed: users who experienced the slow period returned less frequently even after performance was restored. The nocebo effect had been planted.
This has direct implications for product teams: optimizing the fastest interactions often matters less than eliminating the slowest tail-latency experiences, because bad outlier experiences anchor the user's overall perception of the product's reliability.
The Psychology of Comparison
Users do not evaluate speed in isolation. They compare the current product's performance to a remembered baseline — typically their most recently used similar product. Between 2013 and 2016, as mobile apps became significantly faster through better hardware and more optimized codebases, users' subjective performance expectations rose in step. Products that would have seemed acceptably fast in 2012 felt unacceptably slow in 2016, despite delivering the same objective performance as before.
This ratchet effect means that competitive performance benchmarking is as important as absolute performance measurement. Being 20 percent faster than your own previous version matters less than being competitive with the fastest products in your category.
Progress Bars and Spinners: The Psychology of Waiting
Waiting is one of the most studied areas in digital product perception research, because it is universal and because the gap between objective and subjective duration is especially large.
The Progress Indicator Research
Brad Myers' foundational research on progress indicators (1985) established that showing progress — even a simple horizontal bar — significantly reduces user frustration during waits compared to no feedback. This is now so thoroughly accepted that bare pages during loading are considered a basic UX failure.
But the design details matter significantly:
Deterministic bars outperform spinners. A progress bar that fills from left to right conveys specific information: you are X% of the way through, Y% remains. A spinner conveys only that something is happening. Users report lower perceived wait times for deterministic progress indicators than for equivalent-duration spinners.
Acceleration at the end improves satisfaction. Research by Jozef Tan and published in the proceedings of the ACM CHI Conference suggests that progress bars that accelerate in the final 20-30% (even if the underlying operation takes the same time) feel faster and leave users with a better final impression than bars that progress linearly. The end state is weighted more heavily in the overall memory of the wait, consistent with Kahneman's peak-end rule.
Context reduces anxiety. "Uploading your files (14 of 22)..." is more satisfying than "Uploading..." at the same point in the operation. Specificity signals that the system is in control and making progress on a defined task.
Fake progress is acceptable and common. Many progress bars do not reflect the actual completion percentage of the underlying operation (which may not be computable in real time). They show estimated progress based on typical operation durations. This is generally acceptable to users, who prefer a slightly inaccurate progress indicator to no indicator — provided the bar does not stall for long periods at a single percentage. A stall at 99% is one of the most frustrating UX experiences documented in usability research, precisely because it sets up an expectation of imminent completion and then violates it.
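The end-acceleration idea above can be sketched as a mapping from actual completion to displayed completion. This is a minimal illustration, not the design from the cited research: the function name and the `knee`/`at_knee` parameters are assumptions, chosen so the bar runs slightly behind reality early on and then visibly speeds up through the final stretch.

```python
def displayed_progress(actual: float, knee: float = 0.75, at_knee: float = 0.60) -> float:
    """Map actual completion (0..1) to the fraction shown on the progress bar.

    Before `knee`, the bar deliberately runs behind reality; past it, the
    remaining displayed range is compressed into the final stretch of real
    work, so the bar accelerates toward 100% as the operation finishes.
    """
    actual = max(0.0, min(1.0, actual))           # clamp to the valid range
    if actual <= knee:
        return actual * at_knee / knee            # slow phase: behind reality
    # fast phase: the last (1 - knee) of real work covers (1 - at_knee) of the bar
    return at_knee + (actual - knee) * (1.0 - at_knee) / (1.0 - knee)
```

With the default parameters, the final 25 percent of real work moves the bar through its last 40 percent, which produces the accelerating finish without ever letting the bar stall or move backward.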
The Typeahead and Skeleton Screen Effect
Modern interface design uses two techniques that exploit the gap between perceived and actual speed:
Optimistic UI updates show the result of an action immediately in the interface, before the server has confirmed it. If you press "like" on a post, the like count increases instantly — the server update happens in the background. The interaction feels instant because the UI responds instantly; the network latency is hidden.
Skeleton screens — greyed-out placeholder layouts that appear while content loads — reduce perceived load time compared to blank pages or spinners. Research by Luke Wroblewski and others found that skeleton screens feel faster even when load times are identical, because they give the user a sense of the page forming progressively rather than appearing all at once.
Both techniques manage perception without changing actual performance. They are placebo-effect engineering: the experience improves because the expectation and context of the wait changes.
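The optimistic-update pattern described above can be sketched in a framework-agnostic way. This is a simplified illustration: the class and method names are invented for this example, and `send` stands in for the network call, which in a real client would run asynchronously while the UI already shows the new value.

```python
class OptimisticCounter:
    """Minimal sketch of an optimistic UI update with rollback on failure."""

    def __init__(self, initial: int):
        self.shown = initial        # what the UI renders right now
        self._confirmed = initial   # last server-confirmed value

    def increment(self, send) -> None:
        self.shown += 1             # update the UI immediately (optimistic)
        try:
            # confirm with the server; a real client would do this in the background
            self._confirmed = send(self.shown)
        except Exception:
            # server rejected or the network failed: roll back to confirmed state
            self.shown = self._confirmed
```

A "like" button built this way feels instant because `shown` changes before the round trip completes; the rollback path is what keeps the optimism honest when the server disagrees.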
The Distraction Technique
Theme parks have long understood that perceived wait time in queues is reduced when visitors are engaged with entertainment. A 2016 study from the Journal of Service Research applied this finding to digital contexts, finding that interactive loading screens — where users could perform minor tasks or view relevant content during loading — reduced perceived wait time by up to 28% compared to passive progress indicators for equivalent actual loading times.
Applications of this insight in digital products include:
- Showing relevant tips or feature previews during longer load operations
- Displaying progressively loading content (text before images) so users can start reading during load
- Providing micro-interactions (animations that respond to cursor movement) during waits to maintain the sense of active engagement
The Nocebo Effect: How Bad Expectations Damage Good Products
The nocebo effect in technology is less discussed but equally important. Negative expectations cause users to experience products as worse than their objective performance warrants.
First Impression Asymmetry
Research in consumer psychology consistently finds that negative first experiences are weighted more heavily than positive ones (negativity bias is a well-established cognitive phenomenon). A single slow load, a confusing error message, or a visually poor first screen can prime the entire subsequent user experience negatively.
This asymmetry has a specific implication: the first 10 seconds of a user's first interaction with a product are disproportionately influential on their overall satisfaction and retention. Onboarding experiences, landing pages, and first-run experiences deserve investment beyond their absolute frequency of use, because they shape the expectation lens through which all subsequent use is filtered.
A 2019 study by researchers at the Missouri University of Science and Technology tracked eye movement and physiological stress responses of users visiting websites for the first time. Participants formed their first impressions in as little as 50 milliseconds — faster than conscious thought — and these initial impressions correlated strongly with trust and perceived quality assessments made after full interaction.
Review Priming
Users who read negative reviews before using a product report worse experiences than users who read positive reviews — for the same product, on the same day. This has been demonstrated across restaurant experiences, hotel stays, and software applications.
For digital products, this creates a specific risk in reviews and app store ratings: a cluster of negative reviews following a buggy release can degrade user satisfaction ratings for the subsequent weeks of fixed versions, as new users are primed by the negative prior reviews to experience the product negatively.
A 2021 study published in the Journal of Marketing Research found that app ratings showed a significant "review anchoring" effect: users who viewed the distribution of star ratings before use rated the app 0.3-0.5 stars lower when the distribution was skewed negative, even after bugs had been fixed. The expectation set by the review distribution persisted into actual experience.
Perceived Performance After Branding Degradation
Trust in a brand shapes performance perception. Research on consumer electronics found that users consistently rated the performance of identical hardware higher when it carried a premium brand name than when it carried an unfamiliar brand. The same effect appears in software: interface skin changes that signal "premium" or "professional" improve user ratings of responsiveness even when the underlying performance is unchanged.
The reverse is equally true. When a well-regarded brand suffers a high-profile security breach, user satisfaction scores for the product itself — including dimensions unrelated to security — decline measurably in the following weeks. The breach creates a negative expectation halo that colors all subsequent perception.
Trust Signals: Shaping Expectation Before the Interaction
Trust signals are design elements that establish expectations of reliability, quality, and security before the user has evaluated the product on its merits. They work through the placebo mechanism: positive expectations, once established, improve the perceived quality of what follows.
The Stanford Web Credibility Project, conducted by B.J. Fogg and colleagues between 1999 and 2002, surveyed over 4,500 users about what made them trust or distrust websites. The findings were revealing:
| Trust Factor | Weight in Credibility Judgment |
|---|---|
| Professional visual design | Very high |
| Real contact information present | High |
| References to third-party sources | High |
| Privacy policy present | Moderate |
| Error-free copy | Moderate |
| Fast loading | Moderate |
| Up-to-date content | Moderate |
| Author credentials visible | Moderate |
| Awards or recognition displayed | Low-moderate |
The most striking finding was that visual design quality was the primary initial driver of credibility judgments — more influential than content accuracy, technical performance, or explicit trust statements. Users formed credibility assessments in seconds based on visual design, and those initial assessments were highly persistent.
This has direct implications for product teams: investing in visual quality and professional design is not merely aesthetic — it primes users to experience the product's actual performance more favorably.
Other effective trust signals in digital products include:
- Social proof indicators: "Over 2 million users" or "Rated 4.8 by 50,000+ reviews" establish credibility through numbers.
- Security badges and certifications: SSL indicators, SOC 2 badges, and industry compliance marks signal that the product meets external standards.
- Recognizable partner or customer logos: Enterprise products prominently feature logos of well-known customers to prime new users with association credibility.
- Transparency about operations: Products that clearly explain how they work, what data they collect, and how to contact support are experienced as more trustworthy than those that obscure these details.
- Named founders and team members: Products that reveal the humans behind them are rated more trustworthy than anonymous products, even when the qualifications are equivalent.
The Halo Effect in Product Families
The halo effect — where positive assessment of one attribute of a product causes positive assessment of other attributes — is well-documented in marketing psychology. In digital products, this extends across product families. Users who have positive experiences with one Apple product rate other Apple products more favorably before trying them. Users who find one Google product reliable extend that reliability expectation to new Google products.
This has strategic implications for product launches: launching a new product as an extension of a well-regarded brand benefits from inherited trust, while launching under an unknown brand must build trust from scratch. The actual quality of the new product may be identical, but its perceived quality will differ based on the expectation context in which users encounter it.
A/B Testing Perception Effects
A/B testing — showing different versions of an interface to different user segments and measuring behavioral differences — is the primary tool for isolating perception effects in product development.
Perception effects are identifiable when functionally identical variants produce different user behavior. Examples:
Button color and conversion: Classic A/B tests on call-to-action buttons consistently find that color, size, and text affect conversion rates even when the action behind the button is identical. The button's appearance signals what kind of action it is and how safe/risky it is to click.
Price display formatting: "$10.00" and "$10" are numerically identical but produce different perceived value. The "$10" format without cents is perceived as a round, simple price, while "Only $9.99" adds a charm-pricing framing effect. These formatting differences drive meaningful conversion rate differences without changing the actual price.
Loading animation design: A/B tests comparing different loading animation styles — a branded animation vs a generic spinner vs a skeleton screen — for identical actual load times consistently show different user satisfaction ratings and task completion rates.
Form length perception: Long forms with many fields feel more burdensome than short forms, even when the total information requested is equivalent. Breaking a 10-field form into two 5-field steps (without removing any fields) typically improves completion rates, because the second form feels like a manageable continuation rather than an overwhelming burden.
Error message framing: A/B tests of error message copy reveal that framing matters substantially. "Your password must be at least 8 characters" outperforms "Invalid password" for task completion, because it preserves user agency and provides actionable information. The technical outcome is identical; the perceived experience differs significantly.
These tests demonstrate that interface design is perception management — and that perception has measurable behavioral consequences.
The Minimum Viable Perception Test
A practical framework for product teams applies perception testing systematically:
- Identify an interface moment where perception likely diverges from reality (loading states, error states, first-run experiences)
- Create two variants that differ only in how the moment is framed, not in what actually happens
- Measure behavioral outcomes (completion rate, time-to-next-action, return rate) rather than self-reported satisfaction only
- If the variants produce different behavioral outcomes, the perception difference is real and consequential
Self-reported satisfaction surveys are useful but insufficient, because they capture conscious evaluation rather than the automatic perceptual processes that drive most behavior. Behavioral metrics — click rates, completion rates, return rates, session duration — reveal the effect of perception on action more reliably than survey data alone.
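When the behavioral outcome is a rate (completion rate, return rate), one standard way to check that the difference between two framing variants is more than noise is a two-proportion z-test. The sketch below uses only the standard library; the function name is illustrative, and a real experimentation program would add multiple-comparison corrections and pre-registered sample sizes.

```python
from math import erf, sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided two-proportion z-test, e.g. completion rates of variants A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)        # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal approximation, via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 520 of 1,000 completions for variant A against 480 of 1,000 for variant B gives z of roughly 1.79 and a p-value near 0.07: suggestive, but not significant at the conventional 0.05 level, so the perception effect would need a larger sample to confirm.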
The Sound and Feel of Quality
Perception engineering extends beyond visual design. Two underappreciated dimensions of digital product experience are auditory feedback and haptic feedback, both of which shape quality perception in ways users rarely consciously attribute to the specific design choice.
Auditory Design
Research from Rensselaer Polytechnic Institute in 2003 found that the sound of a car door closing affected buyers' assessment of build quality more than the visual design of the door. The sound had been engineered separately from the door's structural design specifically for this perceptual effect. An analogous dynamic operates in software.
Notification sounds, keyboard click sounds, and transition audio cues all affect perception of quality and responsiveness. Apple's systematic audio design across iOS has been cited by usability researchers as a factor in the platform's perceived quality advantage over competitors with less deliberate audio design. The 2022 iOS update that changed the keyboard click sound generated more user feedback than changes to multiple functional features — evidence that sound design is not peripheral to user experience.
Haptic Feedback
On mobile devices, haptic feedback — vibration patterns that accompany interactions — shapes perception of button responsiveness and action confirmation. Researchers at the MIT Media Lab found that users rated mobile interfaces as "more responsive" and "higher quality" when they included well-designed haptic feedback for button presses, even when the actual processing time was identical to haptic-free versions.
The "peek and pop" mechanic introduced in iOS 3D Touch (and later simplified to Haptic Touch) is an example of haptic feedback used to signal depth of interaction. The physical sensation of "pressing harder" creates a meaningful perceptual distinction between surface-level and depth-level interactions that shapes how users mentally model the interface.
Ethical Dimensions: Manipulation vs Enhancement
"Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know." — Jakob Nielsen's Law of the Internet User Experience, explaining why expectation management matters as much as actual performance
The perception effects discussed in this article exist on a spectrum between legitimate design enhancement and manipulative exploitation.
Legitimate enhancement: Designing progress bars that reduce perceived wait time without deceiving users about what is happening. Using skeleton screens to communicate that content is loading. Creating visual designs that signal quality the product actually delivers.
Borderline cases: Progress bars that show artificial fast progress to manage anxiety, then stall at 99%. Countdown timers that reset rather than expire. Social proof numbers that are technically accurate but selected to be maximally impressive.
Manipulative exploitation: False urgency indicators ("Only 2 left!" when inventory is not actually constrained). Fake loading bars that imply technical complexity for instant operations (some research suggests users trust results they had to "wait" for as more thorough). Dark patterns that use fear or social pressure to override user judgment.
The line between perception engineering and manipulation is roughly: are you helping users form accurate mental models, or are you exploiting cognitive biases to produce behaviors users would not endorse if they understood what was happening? Legitimate perception design shapes experience; manipulative design deceives users about reality.
The Regulatory Context
Regulators are increasingly attentive to perception manipulation in digital products. The EU's Digital Services Act (2022) and the FTC's dark patterns enforcement actions (2022-2024) have both addressed specific instances of manipulative perception engineering. The EU Consumer Rights Directive explicitly prohibits fake countdown timers and false scarcity signals. The FTC has pursued enforcement against subscription services that use deceptive framing to prevent cancellations.
The trend is clearly toward treating perception manipulation as a consumer protection issue, not merely an ethical preference. Product teams that build manipulative perception engineering into their products face increasing regulatory risk in addition to the reputational damage from user backlash when manipulation is exposed.
Designing for Perception: A Practical Framework
Given everything that research tells us about perception effects, what should product designers and developers actually do? A structured approach covers four areas:
1. Audit the first encounter. The first 10 seconds of a new user's experience have disproportionate influence. Map the first-run experience in detail and ask: does this experience signal quality, reliability, and trustworthiness? Every visual, motion, sound, and copy element in the first encounter is setting expectations for everything that follows.
2. Map your latency moments. Identify every interaction in the product that involves a wait. For each, assess: is there appropriate feedback? Is the feedback specific or generic? Does the wait end cleanly or with a stall? Progress indicators should be prioritized for any wait over 300ms, with increasing specificity as wait time increases.
3. Build trust signals intentionally. Audit your visual design, social proof, security indicators, and transparency elements. Trust signals are not just for the landing page — they should be reinforced throughout the product experience, especially at moments of friction (payment, data submission, permission requests).
4. Test perception, not just function. Include perception-focused A/B tests in your regular experimentation program. Test framing variations, loading state designs, error message copy, and progress indicator styles. Measure behavioral outcomes (completion rate, return rate) to capture the behavioral impact of perception effects, not just self-reported satisfaction.
Summary
The placebo and nocebo effects in digital products are real, measurable, and consequential. Users do not evaluate software against some objective standard; they experience it through the lens of expectations shaped by brand, design quality, prior encounters, social proof, and the specific interface framing of each interaction.
Perceived performance is only partially determined by actual performance. Progress indicators, skeleton screens, optimistic UI updates, and acceleration at the end of loading bars all improve the subjective experience of speed without changing the underlying operations. Trust signals — visual design quality, social proof, security indicators, partner logos — prime users to experience the product more favorably before they have evaluated it directly.
A/B testing is the primary tool for isolating these perception effects and measuring their behavioral consequences. The consistent finding is that functionally identical interfaces produce meaningfully different user behavior when they differ in how they frame expectations.
For product designers and developers, the implication is that technical performance and perceptual performance are both genuine quality dimensions — and that investments in either can improve user satisfaction, retention, and behavior. The most effective product teams manage both simultaneously: building genuinely fast, reliable products and designing the perceptual context that allows users to accurately experience that quality.
References
- Myers, B.A. (1985). The importance of percent-done progress indicators for computer-human interfaces. Proceedings of CHI '85, 11-17.
- Nielsen, J. (1994). Usability Engineering. Morgan Kaufmann.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Fogg, B.J., et al. (2003). How do users evaluate the credibility of websites? Proceedings of DUX 2003.
- Tan, J. (2016). Progress bar design and perceived wait time. Proceedings of CHI 2016.
- Hicks, J., et al. (2012). The impact of web page load time on user satisfaction. Microsoft Research Technical Report.
- Wroblewski, L. (2013). Skeleton screens and perceived loading time. lukew.com.
- Thaler, R.H. & Sunstein, C.R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.
- Brignull, H. (2010). Dark patterns in user interfaces. darkpatterns.org.
- European Commission. (2022). Digital Services Act — Requirements for online platforms. ec.europa.eu.
- FTC. (2022). Bringing dark patterns to light. Federal Trade Commission Report.
- Lindstrom, M. (2010). Brand Sense: Sensory Secrets Behind the Stuff We Buy. Free Press.
Frequently Asked Questions
What is the placebo effect in software and digital products?
The placebo effect in digital products occurs when a user's positive expectation about a product's performance, quality, or outcome causes them to experience it more favorably than its objective properties warrant. If a user believes an app is fast and reliable, they tend to tolerate the same actual latency more than users who have been primed to expect slowness. Interface design, branding, and trust signals all shape these expectations before the user ever encounters the actual performance.
What is the nocebo effect in technology?
The nocebo effect is the negative counterpart to the placebo effect: negative expectations cause users to experience a product as worse than its objective properties would predict. A user who has read negative reviews, encountered one bad experience, or been primed with a slow loading animation will often rate the same product more negatively than users without that priming. This is why first impressions in digital products are disproportionately influential — a bad initial experience shapes all subsequent perception.
Do progress bars actually improve user experience?
Yes, but not through speed — they improve the subjective experience of waiting. Research by Brad Myers and Jozef Tan found that progress indicators reduce perceived wait time even when the actual wait is identical. The effect is enhanced when progress bars accelerate slightly at the end (creating a sense of completion) and when they provide specific context about what is happening. A deterministic progress bar that shows actual progress is more satisfying than a spinner that conveys only that something is happening.
What are trust signals in UX and why do they matter?
Trust signals are design elements that prime users to expect reliability and quality: security badges, professional visual design, social proof (customer counts, testimonials), recognizable brand partners, and explicit privacy assurances. They matter because user trust is established before the user fully evaluates the product's actual performance. Research from the Stanford Web Credibility Project found that visual design quality and professional appearance were among the top factors users used to assess website trustworthiness — more influential than content accuracy in initial judgments.
How can A/B testing reveal perception effects?
A/B testing can isolate perception effects by testing interface changes that alter expectation without altering actual functionality. For example, testing a progress bar that shows a fast versus slow animation for the same actual operation measures the perception effect of speed cues. Similarly, testing identical content with different visual designs isolates the effect of aesthetic quality on perceived credibility and task completion. When A/B tests show significant user behavior changes for functionally identical variants, the difference is attributable to perception and expectation effects.