Mobile Security Basics: Protecting Apps and User Data

Meta Description: Mobile security threats: data leakage from insecure storage, weak authentication enabling hijacking, malicious apps, and network interception attacks.

Keywords: mobile security, app security, mobile data protection, secure mobile apps, authentication, encryption, mobile app security best practices, secure coding, data privacy, mobile security threats

Tags: #mobile-security #app-security #data-protection #cybersecurity #mobile-development


The capability Cellebrite demonstrated in 2021 required no hacking skills from law enforcement. The Israeli digital forensics company's tools could extract data from a locked iPhone -- call logs, messages, photos, app data -- in hours. The target did not need to have been careless. They needed only to have an iPhone running software with unpatched vulnerabilities.

But Cellebrite's tools require physical access to the device. Most mobile security failures require nothing of the sort. In 2019, a security researcher named Elliot Anderson analyzed the top 200 apps in the Google Play Store and found that 43 of them -- more than one in five -- stored authentication tokens in SharedPreferences, Android's plaintext key-value storage that is accessible to any other app with the right permissions on a rooted device. These were not obscure apps. They included banking applications, healthcare apps, and one widely used government services application.

These developers had not made exotic cryptographic errors. They had used the wrong data storage class because no one had told them that SharedPreferences is not secure, because the documentation did not emphasize this prominently, and because the code worked correctly in testing. The tokens authenticated correctly. The app functioned as intended. And the user's credentials sat in a plaintext file on their device.

This is the reality of mobile security: most failures are not clever attacks against robust defenses. They are mundane oversights -- wrong storage mechanism, forgotten HTTPS enforcement, permissions that seemed harmless, API keys embedded in binaries that any researcher with jadx can extract in ten minutes. The good news is that avoiding these failures does not require being a cryptography expert. It requires knowing which patterns are dangerous and which are safe.


Why Mobile Security Is a Distinct Discipline

Mobile security overlaps substantially with web security -- injection attacks, authentication failures, and insecure data transmission threaten both -- but mobile introduces several threat vectors that have no direct web equivalent.

Physical possession is the fundamental risk. Web servers live in locked data centers with controlled access. Mobile devices go everywhere their users go: into bathrooms, onto bar tops, into taxis, and sometimes into the wrong hands. A stolen or lost device is physical access to every app's local data. This is not a hypothetical threat: the FBI estimates that a phone is stolen every 3.5 seconds in the United States.

App distribution creates delayed patching. When a web application has a vulnerability, the developer deploys a fix and it is active for all users within minutes. When a mobile app has a vulnerability, the developer submits an update to the App Store or Play Store, waits for review, and then depends on users to actually install the update. Vulnerable versions of mobile apps remain in production use for weeks or months after patches are published. This makes pre-deployment security substantially more important in mobile than in web development.

Third-party library risk is concentrated. Mobile apps routinely bundle dozens of third-party SDKs: analytics, advertising, crash reporting, authentication, payment processing. Each SDK is code from another organization running inside your app with your app's permissions. When an advertising SDK was found to contain spyware in 2020, it affected thousands of apps on Google Play that had no knowledge of the malicious code in their dependency tree.

Network environments are inherently hostile. Mobile users connect to coffee shop WiFi, hotel networks, airport connections, and corporate networks with varying security postures. Unlike enterprise web applications accessed over corporate VPNs, consumer mobile apps must function securely over networks that may be actively hostile.

The OWASP Mobile Top 10 provides the clearest taxonomy of where mobile apps actually fail:

| Rank | Vulnerability | Common Manifestation |
| --- | --- | --- |
| M1 | Improper Platform Usage | Storing credentials in UserDefaults instead of Keychain |
| M2 | Insecure Data Storage | Banking app logs account numbers to system debug log |
| M3 | Insecure Communication | Health app transmits records over HTTP |
| M4 | Insecure Authentication | Predictable session tokens, no token expiration |
| M5 | Insufficient Cryptography | MD5 for password hashing, ECB mode encryption |
| M6 | Insecure Authorization | User accesses other users' data by changing an ID parameter |
| M7 | Client Code Quality | Buffer overflow in image processing library |
| M8 | Code Tampering | Repackaged app with injected malware on third-party stores |
| M9 | Reverse Engineering | API keys extracted from decompiled APK |
| M10 | Extraneous Functionality | Debug endpoint active in production build |

The through-line in this list is that most vulnerabilities are design failures rather than cryptographic failures. They are choices made during development that seemed reasonable in isolation and turned out to create security exposure.


Authentication: Getting Identity Right

Authentication is the gate that controls access to everything behind it. When authentication fails, no other security measure matters -- attackers have legitimate access to the user's data and can operate within the app's normal permission scope.

The Token Architecture

Modern mobile apps should never transmit user credentials on every request. The correct architecture uses tokens: short-lived artifacts that prove the user authenticated recently without exposing the actual credentials.

The standard flow:

  1. User provides credentials (username/password, social login, or biometric).
  2. Server validates credentials and issues two artifacts: an access token (short-lived, 15-60 minutes) and a refresh token (longer-lived, typically 7-30 days).
  3. App stores both tokens in secure hardware-backed storage (iOS Keychain, Android Keystore). Never in UserDefaults or SharedPreferences.
  4. API requests include the access token in the HTTP Authorization header as a Bearer token.
  5. When the access token expires, the app silently exchanges the refresh token for a new access token.
  6. On logout: both tokens are deleted from device storage AND invalidated server-side. Server-side invalidation is non-optional -- a token deleted only client-side remains valid on the server and can be recovered from device backups.

Example: A healthcare app handling HIPAA-regulated patient data should configure access tokens to expire after 15 minutes of inactivity and refresh tokens to expire after 24 hours regardless of activity. A note-taking app with less sensitive content might use 60-minute access tokens and 30-day refresh tokens. The appropriate lifetime is a function of the sensitivity of the data protected.
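The decision logic in steps 4 through 6 can be sketched in plain Kotlin. The types and names below (`TokenPair`, `nextAction`, and the `AuthAction` variants) are illustrative, not a platform API; on a device the tokens themselves would live in the Keychain or Keystore:

```kotlin
import java.time.Instant

// Illustrative token-pair holder for the flow above (names are not a platform API).
data class TokenPair(
    val accessToken: String,
    val refreshToken: String,
    val accessExpiresAt: Instant,   // short-lived: 15-60 minutes
    val refreshExpiresAt: Instant,  // longer-lived: 7-30 days
)

fun TokenPair.isAccessExpired(now: Instant): Boolean = !now.isBefore(accessExpiresAt)
fun TokenPair.isRefreshExpired(now: Instant): Boolean = !now.isBefore(refreshExpiresAt)

// What the networking layer should do before each request.
sealed interface AuthAction
object UseAccessToken : AuthAction       // step 4: send the Bearer token
object RefreshAccessToken : AuthAction   // step 5: silent refresh
object ReAuthenticate : AuthAction       // refresh token also expired: back to step 1

fun nextAction(tokens: TokenPair, now: Instant): AuthAction = when {
    !tokens.isAccessExpired(now) -> UseAccessToken
    !tokens.isRefreshExpired(now) -> RefreshAccessToken
    else -> ReAuthenticate
}
```

Centralizing this check in one function keeps the "silent refresh" behavior consistent across every API call site.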

Biometric Authentication

Face ID, Touch ID, and Android Fingerprint/Face authentication are among the highest-security, lowest-friction authentication mechanisms available. They should be the default authentication experience for returning users in any app handling sensitive data.

The key implementation insight: your application never sees the biometric data. The biometric sample is captured, processed, and compared entirely within the device's Secure Enclave (iOS) or Trusted Execution Environment (Android). Your app receives only a binary success/failure signal. There is no biometric data to leak.

When implementing biometric authentication to unlock a cryptographic key (the recommended pattern for unlocking secure storage), the key is bound to biometric authentication in the Secure Enclave/TEE. Decryption fails automatically if biometric authentication fails. This is hardware-enforced security rather than software security.

Always implement a fallback. Users with damaged fingerprint readers, face injuries, or iPhones that do not support Face ID must be able to authenticate. A strong PIN or password is the appropriate fallback. The fallback path should not be weaker than the primary path.

Multi-Factor Authentication for Sensitive Operations

For any operation where the consequences of unauthorized access are severe -- financial transfers, account changes, accessing medical records, changing authentication credentials -- single-factor authentication is insufficient regardless of how strong that factor is.

Time-based One-Time Passwords (TOTP) generated by authenticator applications (Google Authenticator, Authy, Microsoft Authenticator) are the most secure widely-available second factor. The shared secret never transits the network during authentication; only the derived 6-digit code does. TOTP is resistant to phishing (codes expire in 30 seconds and are specific to the site), SIM swapping (no SMS network involvement), and credential stuffing.
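The derivation is small enough to sketch. This is a minimal RFC 6238 TOTP implementation (HMAC-SHA1, 30-second steps); production code should use a vetted library, but the sketch shows why only the derived code, never the shared secret, crosses the network:

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Minimal TOTP sketch per RFC 6238 / RFC 4226 (HMAC-SHA1, 30-second steps).
fun totp(secret: ByteArray, unixTimeSeconds: Long, digits: Int = 6, stepSeconds: Long = 30): String {
    val counter = unixTimeSeconds / stepSeconds
    // Encode the time-step counter as an 8-byte big-endian message (RFC 4226).
    val msg = ByteArray(8) { i -> ((counter ushr (56 - 8 * i)) and 0xff).toByte() }
    val mac = Mac.getInstance("HmacSHA1")
    mac.init(SecretKeySpec(secret, "HmacSHA1"))
    val hash = mac.doFinal(msg)
    // Dynamic truncation: read 4 bytes at an offset taken from the last nibble.
    val offset = hash.last().toInt() and 0x0f
    val binary = ((hash[offset].toInt() and 0x7f) shl 24) or
            ((hash[offset + 1].toInt() and 0xff) shl 16) or
            ((hash[offset + 2].toInt() and 0xff) shl 8) or
            (hash[offset + 3].toInt() and 0xff)
    var mod = 1L
    repeat(digits) { mod *= 10 }
    return (binary % mod).toString().padStart(digits, '0')
}
```

Because both sides derive the code independently from the secret and the clock, an intercepted code is worthless within seconds.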

SMS codes are weaker but substantially better than no second factor. SIM swapping attacks -- convincing a carrier to transfer a victim's phone number to an attacker-controlled SIM -- can defeat SMS-based MFA. High-value targets (executives, cryptocurrency holders) have been successfully attacked through SIM swapping. For general consumer applications protecting moderate-value accounts, SMS MFA represents a reasonable security/convenience trade-off.

Push notifications sent to a trusted device (Duo, Okta Verify, your own implementation) provide good security with excellent user experience. The approval flow is intuitive: "Are you trying to log in to [app] from [city]? Approve / Deny." Failures require active denial rather than passive timeout, which catches phishing attempts where users might absent-mindedly approve requests they did not initiate.


Secure Data Storage: The Most Common Failure Mode

Storage vulnerabilities are responsible for more mobile data exposures than any other single category. The reason is simple: the insecure pattern is often easier to implement than the secure one, documentation emphasizes functionality over security, and storage failures are often invisible during testing.

Platform Secure Storage

iOS Keychain is the correct storage mechanism for any data that should survive app deletion, be protected by device authentication, or that represents a credential of any kind. The Keychain is hardware-encrypted, access-controlled by the device passcode and biometrics, and provides granular access control through Keychain access groups.

Store in the Keychain: authentication tokens, OAuth credentials, encryption keys, passwords (when local storage is necessary), API keys that identify this specific device or user.

The Keychain has performance characteristics that make it inappropriate for high-frequency read operations. Do not store frequently-changing data in the Keychain; store the encryption key there and encrypt your frequently-changing data before writing it to a database.
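That pattern -- key in secure storage, ciphertext in the database -- can be sketched with standard JCA primitives. Here the AES key is generated locally for the sake of a runnable example; on a device it would come from the Keychain or Keystore instead:

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec
import java.security.SecureRandom

// AES-256-GCM gives confidentiality plus integrity; store (iv, ciphertext)
// in the database and keep only the key in Keychain/Keystore.
fun newKey(): SecretKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

fun encrypt(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) } // fresh nonce per record
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext)
}

fun decrypt(key: SecretKey, iv: ByteArray, ciphertext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext) // throws if the ciphertext was tampered with
}
```

Reads and writes hit only the database; the slow secure-storage lookup happens once, to fetch the key.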

Android Keystore provides hardware-backed key storage on devices with supported hardware (most modern Android phones). The Android Keystore stores cryptographic keys rather than arbitrary data -- your app holds keys in the Keystore and uses them to encrypt data stored elsewhere. The EncryptedSharedPreferences class wraps SharedPreferences with Keystore-backed encryption, providing a convenient encrypted alternative to the insecure default.

What Never to Use for Sensitive Data

UserDefaults (iOS) / SharedPreferences (Android) store data as plaintext property lists or XML files in the app's sandbox. On jailbroken iOS devices or rooted Android devices, these files are directly accessible. They are also included in device backups, which may be stored in iCloud or Google Drive with varying encryption levels. If a user's iCloud backup is compromised, their UserDefaults data is exposed.

Example: A banking app that stores the user's last account balance in UserDefaults to display during loading is making a judgment call about the sensitivity of that data. A banking app that stores the user's authentication token in UserDefaults has made a catastrophic security error. The distinction matters.

Application logs are a frequently overlooked data leak vector. NSLog() on iOS and Log.d() on Android write to system logs that are accessible through Xcode, Android Studio, adb logcat, and -- on rooted devices -- by other applications. Search your entire codebase for log statements and verify that none of them include tokens, passwords, account numbers, personal information, or other sensitive data. This search should be automated in your CI/CD pipeline to prevent regression.
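A CI check of this kind can be as simple as two regexes. This is a hypothetical sketch -- the patterns below are illustrative starting points, not a complete taxonomy, and would need tuning to your codebase:

```kotlin
// Flag log statements that appear to emit sensitive data.
val logCall = Regex("""(NSLog|Log\.[dievw]|print|println)\s*\(""")
val sensitive = Regex("""(?i)(token|password|secret|account[_ ]?number|ssn)""")

// Returns 1-based line numbers for CI output.
fun findSensitiveLogs(lines: List<String>): List<Int> =
    lines.withIndex()
        .filter { (_, line) -> logCall.containsMatchIn(line) && sensitive.containsMatchIn(line) }
        .map { (i, _) -> i + 1 }
```

Failing the build on any hit is crude but effective; false positives can be suppressed line-by-line with an explicit annotation your team reviews.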

The clipboard is shared across all apps. Text automatically copied by your app (financial account numbers, one-time codes, passwords from a password manager) is accessible to any other app on the device. iOS 14 introduced notifications when apps read from the clipboard, which created user awareness of how frequently apps access clipboard contents. Minimize automatic clipboard operations with sensitive data.

Encryption at Rest for Local Databases

Applications that store user data locally -- for offline access, caching, or state persistence -- often use SQLite as their local database. SQLite databases are plaintext files. On a compromised device, they are fully readable by anyone with file system access.

SQLCipher is an open-source SQLite extension that provides 256-bit AES encryption for the entire database file. The encryption key can be stored in the iOS Keychain or Android Keystore, ensuring the database is only accessible when the user has authenticated. Core Data on iOS and Room on Android both support SQLCipher as an underlying storage engine.

Data minimization is the strongest storage security stance available: data you do not store cannot be compromised. Before implementing local storage for any sensitive data category, ask whether the user experience genuinely requires local storage or whether the app can fetch it from the server when needed. Offline access patterns, as discussed in offline-first design, should drive this decision.


Network Security: Protecting Data in Transit

Every network request your app makes travels through infrastructure you do not control and may be observed by intermediaries you cannot see. Proper network security ensures that observation yields nothing useful to an attacker.

HTTPS as the Unconditional Baseline

HTTPS (HTTP over TLS) is not a feature for sensitive operations -- it is the required baseline for all network communication in mobile apps. Both iOS and Android enforce this at the platform level: iOS App Transport Security (ATS) and Android Network Security Configuration both block plaintext HTTP connections by default in modern versions.

Development teams sometimes disable these protections to connect to development servers running self-signed certificates or to debug network traffic using proxy tools. These development-time exceptions are the source of production security failures when disabled protections are accidentally shipped. Use build configurations to ensure security policy overrides never reach distribution builds.

TLS 1.3 should be the minimum version where possible. TLS 1.0 and 1.1 have known weaknesses and have been deprecated by major browser vendors and standards bodies. Android enables TLS 1.2+ by default from API level 20. iOS has required TLS 1.2+ through ATS since iOS 9.

Certificate Pinning for High-Security Applications

Standard HTTPS trusts any certificate signed by any Certificate Authority in the device's trust store. This is generally correct behavior -- it allows your app to work with valid certificates from any CA without special configuration -- but it means that a compromised or malicious CA could issue a fraudulent certificate for your domain, enabling a man-in-the-middle attack that breaks HTTPS.

Certificate pinning addresses this by configuring your app to trust only specific certificates or public keys, rather than the entire CA system. An attacker presenting a certificate from a compromised CA would be rejected by a pinned app even though the certificate appears valid to the OS.

Public key pinning is preferred over certificate pinning. Pinning the certificate's public key (a hash of the SubjectPublicKeyInfo structure) remains valid across certificate renewals as long as the same key pair is used. Pinning the certificate itself requires app updates every time the certificate renews (typically annually), creating operational complexity and user impact when certificate updates are delayed.

Include backup pins. If you pin a single public key and that key is compromised, you need to push an emergency app update -- and users who do not update are locked out of your app. Multiple pinned keys (primary plus one backup) provides recovery options without sacrificing security.
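A pin is just a hash. The sketch below derives one in the "sha256/..." format used by OkHttp's CertificatePinner: SHA-256 over the DER-encoded SubjectPublicKeyInfo, base64-encoded. A throwaway RSA key pair stands in for your server certificate's key:

```kotlin
import java.security.KeyPairGenerator
import java.security.MessageDigest
import java.util.Base64

// SHA-256 over the DER-encoded SubjectPublicKeyInfo, base64-encoded.
fun spkiPin(spkiDer: ByteArray): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(spkiDer)
    return "sha256/" + Base64.getEncoder().encodeToString(digest)
}

fun demoPin(): String {
    val keyPair = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.generateKeyPair()
    // getEncoded() on a Java PublicKey returns the X.509 SubjectPublicKeyInfo structure.
    return spkiPin(keyPair.public.encoded)
}
```

Because the pin covers only the public key, renewing the certificate with the same key pair leaves the pin valid -- which is exactly why key pinning is operationally friendlier than certificate pinning.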

Certificate pinning is most appropriate for apps handling financial transactions, medical records, or other high-sensitivity data where the risk of MITM attack justifies the operational complexity. For typical consumer apps, ATS/Network Security Configuration provides adequate baseline protection.

API Keys and Secrets in Client Code

This is among the most common and most easily avoided security failures in mobile development: embedding API keys, database credentials, private signing keys, or service account passwords directly in application code.

Compiled mobile apps can be decompiled. Android APKs can be decoded with tools like jadx or apktool in minutes, restoring readable Java or Kotlin code. iOS binaries are harder to decompile but strings (including hardcoded API keys) can be extracted with tools like strings or frida without full decompilation. Security researchers routinely scan published apps for embedded credentials; services like GitGuardian and TruffleHog do this automatically at scale.

In 2023, a researcher scanning the top 5,000 Android apps in the Google Play Store found over 3,000 apps containing hardcoded secrets, including AWS credentials, Stripe API keys, Firebase database URLs, and private SSH keys.

The solution is architectural: backend services that require secret keys should be called server-side, not client-side. Your mobile app calls your backend API, which holds the secret key and calls the third-party service. The mobile app never sees the key. For services where direct client access is unavoidable, use short-lived tokens provisioned by your backend rather than long-lived API keys.
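The "short-lived tokens provisioned by your backend" pattern can be sketched with an HMAC. The format and field names here are illustrative, not a standard (a real deployment would more likely use signed JWTs, and must compare signatures in constant time):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.util.Base64

// The long-lived secret stays on the server; the client receives only an
// expiring, HMAC-signed token it can present to the third-party service.
fun mint(serverSecret: ByteArray, userId: String, expiresAtEpochSec: Long): String {
    val payload = "$userId:$expiresAtEpochSec"
    val mac = Mac.getInstance("HmacSHA256").apply { init(SecretKeySpec(serverSecret, "HmacSHA256")) }
    val sig = Base64.getUrlEncoder().withoutPadding().encodeToString(mac.doFinal(payload.toByteArray()))
    return "$payload:$sig"
}

fun verify(serverSecret: ByteArray, token: String, nowEpochSec: Long): Boolean {
    val parts = token.split(":")
    if (parts.size != 3) return false
    val expiry = parts[1].toLongOrNull() ?: return false
    // Reject expired tokens, then check the signature by re-minting.
    // NOTE: production code must use a constant-time comparison here.
    return nowEpochSec < expiry && mint(serverSecret, parts[0], expiry) == token
}
```

Even if a researcher extracts such a token from the binary or from traffic, it expires within minutes and cannot be used to mint new ones.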


Permission Management: Practicing Least Privilege

Every platform permission your app requests is a security surface and a trust negotiation. Users who see your app requesting location, contacts, camera, microphone, and notifications simultaneously on first launch are correct to be suspicious.

The principle of least privilege -- request only what you need, only when you need it -- is both a security principle and a practical UX strategy. Apps that request permissions at contextually appropriate moments, with clear explanations of purpose, achieve grant rates substantially higher than apps that request permissions during onboarding with no context.

Platform-specific guidance:

iOS presents permission dialogs with a developer-provided usage description string that appears in the system dialog. This string is your explanation to the user. "This app would like to access your location" is the iOS template. "Location is used to show you weather for your current city without requiring you to type it" is a developer-provided explanation that meaningfully increases user willingness to grant.

iOS allows apps to request most permissions once. If the user denies a permission, the system dialog will not appear again. Subsequent requests from the app are silently denied. Apps must detect denial and guide users to the Settings app to change the permission if they later decide to grant it.

Android uses runtime permissions (since Android 6.0), which call for the same contextual request approach. Android additionally allows users to grant permissions for "one time" (since Android 11) -- a single-session grant that expires automatically -- or "only while using the app" for location, offering finer-grained control.

Permissions that signal danger if not justified:

Location (always on / background) means your app is tracking the user's physical location continuously even when the app is closed. This is appropriate for navigation apps, fitness apps tracking outdoor workouts, and apps where real-time location is the core function. It is not appropriate for apps that claim they need it for "better recommendations."

Microphone access in apps without obvious audio recording functionality has been the basis for multiple data privacy scandals. Users who see microphone permission in a retail app or a flashlight app are right to be suspicious.

Contacts access provides the full contents of the user's contact book, including personal relationships, business contacts, phone numbers, and email addresses for every contact. This is highly sensitive social graph data. Social apps that need it to find existing users should use contact-matching approaches that never transmit raw contact data to servers.


Protecting Application Integrity

Mobile apps run on devices controlled by users, some of whom are adversarial. Jailbroken iOS devices and rooted Android devices bypass the operating system's security model, potentially exposing your app's files, traffic, and runtime state.

Detection and Response

Detecting jailbreaks and root access is an imperfect science -- detection methods are routinely bypassed by tools like Liberty Lite and Magisk Hide. A determined attacker will eventually circumvent detection. The goal is raising the cost of attack, not making attack impossible.

Common detection heuristics: checking for existence of files created by common jailbreak tools (Cydia on iOS, SuperSU on Android), attempting to write outside the app sandbox, detecting code signing violations, and checking for debugging symbols that should not be present in production builds.
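The first heuristic -- probing for files installed by jailbreak or root tooling -- is a few lines. The paths below are illustrative examples from well-known tools; real checks combine many signals and still assume eventual bypass:

```kotlin
import java.io.File

// Illustrative paths left behind by common jailbreak/root tooling.
val suspectPaths = listOf(
    "/Applications/Cydia.app",                          // iOS: Cydia store
    "/Library/MobileSubstrate/MobileSubstrate.dylib",   // iOS: tweak injection
    "/system/app/Superuser.apk",                        // Android: root manager
    "/system/xbin/su",                                  // Android: su binary
)

// True if any marker file exists on this device.
fun looksCompromised(paths: List<String> = suspectPaths): Boolean =
    paths.any { File(it).exists() }
```

On a compromised device these calls themselves can be hooked to lie, which is why detection is a cost-raising measure rather than a guarantee.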

Response to detected compromise should be proportional to the sensitivity of your data. A gaming app might silently disable a leaderboard (to prevent cheating). A banking app might refuse to operate at all and display an explanation. Healthcare apps handling HIPAA-regulated data should similarly refuse operation, as their compliance posture may depend on data protection guarantees that cannot be made on a compromised device.

Obfuscation and Binary Protection

ProGuard and R8 on Android rename classes, methods, and variables during compilation, producing binaries where "CustomerAuthenticationManager" becomes "a" and "validatePasswordComplexity()" becomes "b()". The resulting code functions identically but is substantially harder to understand when decompiled.

iOS Swift code does not have a direct equivalent of ProGuard (Swift code is compiled to native binary rather than bytecode, making decompilation harder but not impossible), but string obfuscation tools can protect hardcoded strings in the binary from trivial extraction.

The security value of obfuscation is real but bounded: it deters casual reverse engineering and makes automated scanning tools less effective, but it does not prevent analysis by determined researchers with appropriate tools. The practical benefit is making your app a less attractive target relative to competitors who have done nothing, and protecting proprietary business logic (pricing algorithms, recommendation logic) from trivial copying.


Security Testing: Finding Problems Before Attackers Do

Security testing in mobile development follows three complementary approaches: static analysis examines code without executing it, dynamic analysis tests behavior during execution, and adversarial testing attempts active exploitation.

Static Application Security Testing (SAST) tools analyze source code or compiled binaries for known vulnerability patterns. MobSF (Mobile Security Framework) is an open-source tool that performs comprehensive automated analysis of both Android APKs and iOS IPAs, identifying insecure API usage, hardcoded secrets, insecure configurations, and dozens of other vulnerability categories. Commercial tools like Checkmarx and Veracode provide deeper analysis with better false-positive filtering.

Integrating SAST into CI/CD pipelines -- running security scans automatically on every pull request -- catches regressions before they reach production. A security check that fails the build when hardcoded secrets are detected is more reliable than manual code review for catching this specific issue.

Dynamic Application Security Testing (DAST) tests the running app by observing its behavior. Network traffic interception using proxy tools (Burp Suite, Charles Proxy, mitmproxy) reveals what data the app transmits, in what format, and with what security controls. This testing commonly reveals cleartext transmission of sensitive data, session token exposure in URL parameters, and certificate validation bypasses that were implemented for development convenience and never removed.

Dependency scanning checks your app's third-party libraries against databases of known vulnerabilities. The OWASP Dependency-Check tool, Snyk, and GitHub Dependabot all maintain databases of CVEs affecting common mobile libraries. When a vulnerability is discovered in a library you use -- as happened with Log4Shell in 2021 and with multiple CocoaPods distribution vulnerabilities -- automated scanning immediately identifies affected projects.

Penetration testing by professional security engineers provides the most realistic assessment of your app's security posture. Security firms specializing in mobile application testing (NowSecure, Cure53, Praetorian) apply manual expert analysis to identify vulnerabilities that automated tools miss. The investment is significant ($20,000-$80,000 for a comprehensive mobile pen test) but appropriate before major launches and periodically for apps handling sensitive data.


Regulatory Compliance

Mobile apps handling specific categories of data operate under legal frameworks that impose security requirements beyond good engineering practice.

GDPR (General Data Protection Regulation, EU, effective May 2018) requires lawful basis for processing personal data, explicit consent for optional data collection, right to erasure (users can demand their data be deleted), data portability, and breach notification within 72 hours. GDPR applies to any app with EU users regardless of where the developer is based.

HIPAA (Health Insurance Portability and Accountability Act, US) imposes strict requirements on apps handling Protected Health Information (PHI): technical safeguards including encryption and access controls, administrative safeguards including workforce training and security policies, physical safeguards for devices handling PHI, and breach notification requirements. Apps that qualify as "covered entities" or "business associates" under HIPAA cannot use many standard analytics platforms (which are not HIPAA-compliant) without special agreements.

PCI-DSS (Payment Card Industry Data Security Standard) governs handling of credit card data. The primary guidance for mobile apps is to never store cardholder data locally and to use tokenization through PCI-compliant payment processors (Stripe, Braintree, Square) rather than handling raw card numbers. Modern payment SDKs from compliant providers handle the cryptographic complexity and keep card data entirely within the processor's certified environment.

COPPA (Children's Online Privacy Protection Act, US) prohibits collecting personal information from children under 13 without verified parental consent. Apps that knowingly target children or that are likely to be used by children must implement age verification, restrict data collection, and obtain appropriate consent.

Understanding the broader context of privacy and security trade-offs helps frame compliance not as a legal checkbox exercise but as a genuine alignment of data handling practices with user interests.


Building Security as a Development Culture

Technical controls matter, but security culture determines whether those controls are applied consistently. Development teams under timeline pressure make expedient choices. "We'll fix that later" is how insecure storage patterns make it to production.

Secure defaults reduce the cognitive burden of security. Configure your development environment so that the easy path is the secure path. Use ESLint rules or SwiftLint rules that flag known insecure patterns. Use linter checks that detect UserDefaults usage with sensitive keys. Make the insecure choice require extra steps rather than the secure choice.

Security in code review means every code review includes a security perspective, not just functional correctness. Questions that belong in review: "Where can this input come from -- could it be malicious?" "Where is this data going to be stored?" "Is this network request going over HTTPS?" "Are we logging anything sensitive here?" A code review checklist that includes security items produces better outcomes than relying on reviewers to remember security considerations spontaneously.

Incident response planning before an incident occurs determines whether the response is effective. Organizations that have never planned their breach response discover during an actual breach that they do not have a list of affected users, do not know what data was exposed, do not have breach notification templates, and do not have a designated communication lead. GDPR's 72-hour breach notification requirement focuses attention on this planning.


Frequently Asked Questions

What are the main security threats facing mobile apps?

Major mobile security threats:

  1. Data leakage -- sensitive data exposed through insecure storage, logs, or transmission.
  2. Insecure authentication -- weak login mechanisms and session hijacking.
  3. Man-in-the-middle attacks -- interception of network traffic on public WiFi.
  4. Code tampering -- reverse engineering and repackaging of malicious versions.
  5. Insecure data storage -- plaintext passwords and unencrypted local databases.
  6. Excessive permissions -- requesting unnecessary access to device features.
  7. Poor session management -- tokens that never expire, weak session handling.
  8. Client-side injection -- SQL injection and XSS in web views.
  9. Physical device access -- unlocked phones, lost or stolen devices.
  10. Malicious third-party libraries -- compromised dependencies.

Mobile-specific risks compound these: apps run on devices you do not control, users connect to untrusted networks, and device theft or loss is common. Security must be built in from the start -- retrofitting is difficult and expensive.

How should mobile apps handle authentication and user sessions securely?

Authentication best practices:

  1. Use platform capabilities -- biometric authentication (Face ID, Touch ID, fingerprint) and device credentials.
  2. Never store passwords -- use tokens; if you must verify locally, use secure hashing (bcrypt, Argon2).
  3. Implement proper token management -- short-lived access tokens, refresh tokens, secure storage in the Keychain (iOS) or Keystore (Android).
  4. Enable MFA where appropriate -- especially for sensitive operations.
  5. Enforce strong password policies -- but balance security with usability.

Session management:

  1. Set reasonable timeouts -- balance security with UX.
  2. Invalidate tokens on logout -- server-side and client-side.
  3. Use secure transmission -- HTTPS only, with certificate pinning for high-security apps.
  4. Detect anomalies -- unusual locations, device changes.
  5. Allow session revocation -- let users end sessions remotely.

Avoid storing credentials in plain text, using weak hashing (MD5, SHA-1), setting overly long session lifetimes, and transmitting credentials in URLs. OAuth 2.0 and OpenID Connect are industry standards for good reason -- use them.

What is secure data storage and how do you implement it in mobile apps?

Secure storage principles: (1) Never store sensitive data if not necessary—the best security is not storing it at all, (2) Use platform secure storage—iOS Keychain for credentials/tokens, Android Keystore, (3) Encrypt sensitive data—at rest and in transit, (4) Minimize data retention—delete data when no longer needed, (5) Protect the app sandbox—ensure files aren't world-readable. Implementation: (1) Keychain/Keystore—for passwords, tokens, encryption keys, (2) Encrypted databases—SQLCipher for encrypted SQLite, Realm encryption, (3) File encryption—encrypt files containing sensitive data, (4) Secure preferences—not SharedPreferences/UserDefaults for sensitive data; use encrypted alternatives. What to protect: authentication tokens, personal information (PII), financial data, health information, location data, user-generated content if private. What doesn't need encryption: public content, UI preferences, non-sensitive cached data. Common mistakes: storing API keys in code (keep them server-side instead), encrypting data that must survive migration with device-bound keys (it becomes unrecoverable on device change), weak encryption algorithms. Test: try extracting data from a rooted/jailbroken device—if the data is accessible, the storage is not secure enough.
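The extraction test suggested above can be partially automated. The sketch below scans an Android SharedPreferences XML file (as pulled from a rooted test device) for values that look like plaintext credentials; the key patterns and the JWT heuristic are illustrative assumptions, not a standard tool.

```python
import re
import xml.etree.ElementTree as ET

# Heuristics: key names that suggest secrets, and the telltale "eyJ" prefix
# of an unencrypted JWT (base64url-encoded '{"').
SUSPICIOUS_KEYS = re.compile(r"(token|secret|password|api_?key|auth)", re.I)
JWT_PATTERN = re.compile(r"^eyJ[\w-]+\.[\w-]+\.[\w-]*$")

def find_plaintext_secrets(xml_text: str) -> list[tuple[str, str]]:
    """Return (key, value) pairs in a SharedPreferences file that look like secrets."""
    findings = []
    for elem in ET.fromstring(xml_text):
        key = elem.get("name", "")
        value = elem.text or elem.get("value", "")
        if SUSPICIOUS_KEYS.search(key) or JWT_PATTERN.match(value):
            findings.append((key, value))
    return findings
```

Any hit here means the app should have used the Keystore or an encrypted preferences wrapper instead.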

How do you secure network communications in mobile apps?

Network security essentials: (1) Use HTTPS exclusively—no HTTP except for public, non-sensitive content, (2) Implement certificate pinning—for high-security apps, it prevents MITM even with a compromised CA, (3) Validate certificates—don't disable SSL validation (a common bad practice left over from development), (4) Use modern TLS—TLS 1.2+ minimum, prefer TLS 1.3, (5) Secure API design—authentication tokens in headers not URLs, rate limiting, input validation. Additional protections: (1) Encrypt sensitive payloads—even over HTTPS, for extra protection, (2) Implement request signing—verify requests haven't been tampered with, (3) Use a VPN when appropriate—for enterprise apps, (4) Detect unsafe networks—warn users on open WiFi, (5) Implement timeout and retry logic—graceful handling of network issues. API security: (1) Token-based auth—not credentials with each request, (2) Scope permissions—limit what tokens can access, (3) Rotate tokens—regular refresh, (4) Validate server-side—never trust the client. Monitor: failed authentication attempts, unusual traffic patterns, API abuse. Public WiFi is especially dangerous—assume the network is hostile.
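The TLS-version and pinning points can be sketched with Python's stdlib `ssl` module. Note the simplifications: production pinning usually hashes the public key (SPKI) so certificates can be rotated without an app update, whereas this sketch hashes the whole DER certificate; the pin set is a placeholder you would derive from your own certificates.

```python
import hashlib
import ssl

def strict_context() -> ssl.SSLContext:
    """TLS context with hostname/CA verification on and TLS 1.2 as the floor."""
    ctx = ssl.create_default_context()            # verification enabled by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx

def cert_is_pinned(der_cert: bytes, pins: set[str]) -> bool:
    """Compare the server certificate's SHA-256 fingerprint to the pinned set."""
    return hashlib.sha256(der_cert).hexdigest() in pins
```

In use, you would wrap the socket with `strict_context()`, fetch the peer certificate via `getpeercert(binary_form=True)`, and abort the connection when `cert_is_pinned` returns False. Mobile platforms offer declarative equivalents (Android Network Security Config, iOS App Transport Security) that are usually preferable to hand-rolled checks.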

What mobile app permissions should you request and how?

Permission principles: (1) Request the minimum necessary—only what the app truly needs, (2) Request at point of use—when context is clear, not on launch, (3) Explain why—users are more likely to grant if they understand the benefit, (4) Graceful degradation—the app should work (with reduced functionality) if permission is denied, (5) Respect denials—don't repeatedly ask if the user said no. Critical permissions (need careful justification): location (especially continuous tracking), camera, microphone, contacts, photos, health data, notification listener access. Less sensitive: posting notifications, network state, vibration. Platform differences: iOS asks permission the first time a feature is used; Android granted permissions at install time before 6.0 and uses runtime requests since. Best practices: (1) Show a custom prompt explaining the need before the system prompt, (2) Offer alternative flows if permission is denied, (3) Link to settings if the user denied but later needs the feature, (4) Audit permissions regularly—remove unused ones, (5) Follow the principle of least privilege. Red flags: requesting location but not using it, requesting contacts for a non-social app, requiring unnecessary permissions to function. Users are increasingly permission-conscious—expect to justify every request.
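The request-at-point-of-use flow above amounts to a small state machine. This is a hypothetical decision helper, platform-agnostic by design; the action names and the two-denials threshold are illustrative choices, not platform rules.

```python
from enum import Enum, auto

class PermissionAction(Enum):
    SHOW_RATIONALE_THEN_REQUEST = auto()  # custom explanation, then system prompt
    LINK_TO_SETTINGS = auto()             # prompt will no longer appear
    USE_DEGRADED_FLOW = auto()            # stop asking; offer reduced functionality

def next_permission_step(denial_count: int,
                         permanently_denied: bool) -> PermissionAction:
    """Decide what to do when a feature needing a permission is invoked."""
    if permanently_denied:
        # The OS will not show the prompt again ("Don't ask again" on Android,
        # any denial on iOS): offer a settings deep link, keep the app usable.
        return PermissionAction.LINK_TO_SETTINGS
    if denial_count <= 1:
        # First ask, or one prior denial: explain the benefit, then request.
        return PermissionAction.SHOW_RATIONALE_THEN_REQUEST
    # Repeated denials: respect the user's answer and degrade gracefully.
    return PermissionAction.USE_DEGRADED_FLOW
```

On a real platform this helper would sit behind the OS APIs (e.g. `shouldShowRequestPermissionRationale` on Android) that report the actual denial state.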

How do you protect mobile apps from reverse engineering and tampering?

App protection techniques: (1) Code obfuscation—make code harder to understand (R8/ProGuard for Android, commercial obfuscators for iOS), (2) Certificate pinning—prevent MITM even with a compromised CA, (3) Jailbreak/root detection—detect compromised devices, limit functionality or warn users, (4) Binary protection—encrypt strings, anti-debugging measures, (5) Runtime application self-protection (RASP)—detect and respond to tampering at runtime. What to protect: (1) API keys and secrets—never hardcode them; keep them server-side, (2) Proprietary algorithms—if they exist, (3) License verification—for paid apps, (4) Authentication logic—make sure it is primarily server-side. Reality check: determined attackers will reverse engineer anything—obfuscation slows them down but doesn't stop them. Real protection: (1) Server-side validation—never trust the client, (2) Detect abnormal behavior—rate limiting, anomaly detection, (3) Defense in depth—multiple layers, (4) Monitor for piracy/fraud—detect and respond. Focus security effort on: protecting user data (most important), preventing abuse that costs you money, protecting competitive advantage. Don't: rely on client-side security for critical functions, or waste effort protecting non-sensitive logic.
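The "never trust the client" point pairs naturally with request signing from the network section: the server recomputes the signature itself rather than trusting anything the client asserts. A minimal HMAC sketch, with an invented field layout—real deployments also bind a nonce and enforce timestamp freshness to resist replay:

```python
import hashlib
import hmac

def sign_request(key: bytes, method: str, path: str, body: bytes,
                 timestamp: str) -> str:
    """Client side: sign the canonical request fields with a per-session key."""
    message = b"\n".join([method.encode(), path.encode(), body, timestamp.encode()])
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def server_verify(key: bytes, method: str, path: str, body: bytes,
                  timestamp: str, signature: str) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    expected = sign_request(key, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)
```

Because verification happens server-side, a tampered or repackaged client gains nothing by altering the request body—any change invalidates the signature.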

What security testing and compliance are necessary for mobile apps?

Security testing types: (1) Static analysis—scan code for vulnerabilities (SonarQube, Checkmarx), (2) Dynamic analysis—test the running app for security issues, (3) Penetration testing—simulate attacks, especially before launch and major updates, (4) Dependency scanning—check third-party libraries for known vulnerabilities, (5) Manual code review—security-focused review of critical code. Test for: OWASP Mobile Top 10 vulnerabilities, injection attacks, authentication issues, session management, data leakage, insecure storage, poor cryptography. Compliance requirements: (1) GDPR—if you have European users, data protection and privacy, (2) CCPA—for California residents, (3) HIPAA—if handling health data, (4) PCI-DSS—if handling payments, (5) COPPA—if targeting children under 13, (6) App store policies—both Apple and Google have security requirements. Best practices: (1) Security throughout development—not just at the end, (2) Automated scanning in CI/CD—catch issues early, (3) Regular security updates—patch vulnerabilities promptly, (4) Incident response plan—know what to do if a breach occurs, (5) Security training—ensure the team understands the risks. Document your security measures—helpful for compliance and user trust.
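Dependency scanning reduces to comparing your pinned versions against an advisory feed. The toy pass below uses an invented advisory table and package names purely for illustration; real scans consume feeds such as OSV or the GitHub Advisory Database via tools like pip-audit or OWASP Dependency-Check.

```python
def scan_dependencies(requirements: list[str],
                      advisories: dict[tuple[str, str], str]) -> list[str]:
    """Return advisory IDs for any pinned dependency with a known issue.

    `requirements` holds "name==version" pins; `advisories` maps
    (name, version) pairs to advisory identifiers.
    """
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        advisory = advisories.get((name.strip().lower(), version.strip()))
        if advisory:
            findings.append(advisory)
    return findings
```

Running a check like this in CI on every build is what "automated scanning in CI/CD" means in practice: a vulnerable pin fails the build before it ships.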