Python Interview Questions
35+ Python interview questions with answers, organized by topic. Perfect for a quick 15-minute revision before an interview.
Python Basics
Q: What is the difference between mutable and immutable objects in Python?
Mutable objects can be changed in place: list, dict, set. Immutable objects cannot: int, float, str, tuple, frozenset. When you "modify" an immutable object, Python creates a new one. Immutability matters for: dict keys (must be hashable, hence immutable), thread safety, and preventing accidental changes.
Q: What is the GIL (Global Interpreter Lock)? How does it affect multithreading?
The GIL is a mutex in CPython that lets only one thread execute Python bytecode at a time, so threads don't give true parallelism for CPU-bound work. Workarounds: (1) multiprocessing for CPU-bound tasks (separate processes, each with its own GIL). (2) Threads still help for I/O-bound tasks (network, file) because the GIL is released during I/O. (3) Python 3.13+ has an experimental free-threaded build (no GIL).
Q: What is the difference between a shallow copy and a deep copy?
A shallow copy (copy.copy() or list.copy()) creates a new outer object but shares references to nested objects. A deep copy (copy.deepcopy()) recursively copies everything — the outer object and all nested objects.
import copy
original = [[1, 2], [3, 4]]
shallow = copy.copy(original)
shallow[0][0] = 99
print(original[0][0]) # 99 — nested list is shared!
deep = copy.deepcopy(original)
deep[0][0] = 0
print(original[0][0]) # still 99 — modifying the deep copy has no effect
Q: What is the difference between is and ==?
== checks value equality (do the objects contain the same data?). is checks identity (are they the exact same object in memory?).
a = [1, 2, 3]
b = [1, 2, 3]
a == b # True — same values
a is b # False — different objects
c = a
a is c # True — same object
Use is only for singletons like None: x is None. (PEP 8 discourages comparing to True/False with is — prefer a plain truthiness check.)
Q: When would you use a list vs a tuple?
Use a list for sequences you need to change: appending, sorting, removing. Use a tuple for fixed collections — records like (x, y) coordinates, multiple return values, and dict keys (tuples are hashable; lists are not). Tuples also signal intent: this data shouldn't change.
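A quick sketch of the distinction (the variable names are illustrative):

```python
# Tuple: fixed record, hashable — usable as a dict key
point = (3, 4)
distances = {point: 5.0}

# List: mutable sequence — grows and sorts in place
readings = [3, 1, 2]
readings.append(4)
readings.sort()
print(readings)  # [1, 2, 3, 4]
```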
Q: What are the key differences between Python 2 and Python 3?
(1) print is a function in 3, a statement in 2. (2) / is true division in 3 (5/2 = 2.5), integer division in 2 (5/2 = 2). (3) Strings are Unicode by default in 3, bytes in 2. (4) range() returns a lazy sequence in 3, a list in 2. (5) 3 has f-strings, dataclasses, async/await, the walrus operator (:=), and type hints. All new projects should use Python 3.10+.
Want deeper coverage? See Python Overview and Core Concepts.
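A few of the Python 3 behaviors listed above, shown directly:

```python
# True division vs floor division (Python 3)
print(5 / 2)   # 2.5
print(5 // 2)  # 2

# f-strings (3.6+) and the walrus operator (3.8+)
name = "world"
greeting = f"hello, {name}"
if (n := len(greeting)) > 5:
    print(greeting, n)  # hello, world 12
```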
Data Structures
Q: What is the time complexity of dict lookup? Why is it so fast?
Average-case O(1). A dict is a hash table: Python hashes the key to jump straight to its slot instead of scanning the container, so key in dict is O(1) while item in list is O(n).
Q: What is defaultdict and when would you use it?
defaultdict from the collections module auto-creates a default value when you access a missing key. defaultdict(int) starts missing keys at 0 (for counting). defaultdict(list) starts them as empty lists (for grouping).
from collections import defaultdict
groups = defaultdict(list)
for name, dept in [("Alice", "eng"), ("Bob", "sales")]:
groups[dept].append(name)
# No KeyError — keys auto-created
Q: List comprehension vs map() — which is better?
Mostly readability — list comprehensions are generally considered more Pythonic. Use map() when you already have a named function: list(map(str, numbers)) is cleaner than [str(n) for n in numbers]. For transformations with conditions, comprehensions win: [x**2 for x in nums if x > 0].
Q: Explain set operations and give a practical use case.
Sets support | (union), & (intersection), - (difference), ^ (symmetric difference). Practical use: finding common users between two systems.
system_a = {"alice", "bob", "charlie"}
system_b = {"bob", "diana", "charlie"}
in_both = system_a & system_b # {'bob', 'charlie'}
only_in_a = system_a - system_b # {'alice'}
in_either = system_a | system_b # all 4 users
Q: What is a deque and when should you use it instead of a list?
collections.deque (double-ended queue) provides O(1) append and pop from both ends. Lists are O(n) for left-side operations because all elements must shift. Use deque for: queues, sliding windows, BFS algorithms, and any case where you add/remove from the front.
from collections import deque
q = deque()
q.append("task1") # Right end: O(1)
q.appendleft("urgent") # Left end: O(1)
q.popleft() # "urgent" — O(1)
Q: When would you use a tuple as a dict key instead of a list?
Use a tuple when the key is a composite value: cache = {(lat, lon): "New York"} or visits = {(user_id, date): count}. Dict keys must be hashable — tuples are, lists are not (a list key raises TypeError).
Deeper coverage: Data Structures Deep Dive
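A runnable version of the composite-key idea (the data is illustrative):

```python
# Composite keys: (user_id, date) tuples are hashable, so valid dict keys
visits = {}
visits[(42, "2024-01-01")] = 3
visits[(42, "2024-01-02")] = 5

# Total visits for user 42 across all dates
total = sum(count for (uid, _), count in visits.items() if uid == 42)
print(total)  # 8
```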
Functions
Q: Explain *args and **kwargs with an example.
*args collects extra positional arguments into a tuple. **kwargs collects extra keyword arguments into a dict.
def log(level, *args, **kwargs):
print(f"[{level}]", *args)
for key, val in kwargs.items():
print(f" {key}={val}")
log("INFO", "Server started", port=8080, host="0.0.0.0")
# [INFO] Server started
# port=8080
# host=0.0.0.0
Q: What is a closure? Give a practical example.
A closure is an inner function that remembers variables from its enclosing scope, even after the outer function has returned.
def make_multiplier(factor):
def multiply(x):
return x * factor # 'factor' is captured
return multiply
double = make_multiplier(2)
triple = make_multiplier(3)
double(5) # 10
triple(5) # 15
Closures enable factory functions, decorators, and callback patterns.
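Closures can also carry mutable state across calls — a sketch of a counter factory (make_counter is an illustrative name):

```python
def make_counter():
    count = 0
    def increment():
        nonlocal count  # rebind the captured variable, not a new local
        count += 1
        return count
    return increment

clicks = make_counter()
clicks()  # 1
clicks()  # 2 — state persists between calls
```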
Q: Write a decorator that logs the arguments and return value of any function.
import functools
def log_calls(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
args_str = ", ".join(
[repr(a) for a in args] +
[f"{k}={v!r}" for k, v in kwargs.items()]
)
print(f"Calling {func.__name__}({args_str})")
result = func(*args, **kwargs)
print(f"{func.__name__} returned {result!r}")
return result
return wrapper
@log_calls
def add(a, b):
return a + b
add(2, 3)
# Calling add(2, 3)
# add returned 5
Q: What is a generator and how does it differ from a list?
A generator produces values lazily, one at a time, using yield. A list stores all values in memory at once. Generators are memory-efficient for large or infinite sequences.
# List: all in memory at once
squares_list = [x**2 for x in range(1_000_000)] # ~8MB
# Generator: one value at a time
squares_gen = (x**2 for x in range(1_000_000)) # ~100 bytes
next(squares_gen) # 0
next(squares_gen) # 1
Use generators when you process items one by one and don't need the full list in memory.
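Generators can also be written as functions with yield — a minimal sketch:

```python
def countdown(n):
    """Yield n, n-1, ..., 1 lazily — nothing is computed until requested."""
    while n > 0:
        yield n
        n -= 1

list(countdown(3))  # [3, 2, 1]
```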
Q: When should you use a lambda function vs a regular def?
Use lambda for short, throwaway functions — typically as sort keys, map/filter callbacks, or inline conditions. Use def for anything that's reusable, needs a docstring, or has more than one expression. PEP 8 discourages assigning lambdas to variables — use def instead for named functions.
Deeper coverage: Functions & Modules Deep Dive
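The classic lambda use case — an inline sort key (the data is illustrative):

```python
people = [("Alice", 30), ("Bob", 25), ("Carol", 35)]
# Sort by the second element (age) with an inline key function
people.sort(key=lambda person: person[1])
print(people[0])  # ('Bob', 25)
```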
Object-Oriented Programming
Q: What is the difference between __init__ and __new__?
__new__ creates the instance (allocates memory). __init__ initializes it (sets attributes). __new__ is called first and returns the new object, then __init__ receives it as self. You rarely override __new__ — it's needed for immutable types (since you can't modify them in __init__) and singleton patterns.
class Singleton:
_instance = None
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
Q: When would you use inheritance vs composition?
Use inheritance for a strict is-a relationship; use composition (has-a) when one object uses another. Prefer composition — it's more flexible and avoids fragile, deep class hierarchies.
# Composition (preferred)
class Car:
def __init__(self):
self.engine = Engine() # Car HAS an engine
# Inheritance
class ElectricCar(Car): # ElectricCar IS a Car
pass
Q: Name 5 dunder (magic) methods and what they do.
__init__(self) — Initialize instance attributes (constructor)
__str__(self) — Human-readable string (print(obj))
__repr__(self) — Developer-readable string (repr(obj), used in debugger)
__len__(self) — Return length (len(obj))
__eq__(self, other) — Equality check (obj == other)
Others: __lt__, __getitem__, __iter__, __enter__/__exit__ (context managers), __call__ (make object callable).
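A small class wiring several of these together (Playlist is an illustrative name):

```python
class Playlist:
    def __init__(self, name, songs):
        self.name = name
        self.songs = songs

    def __repr__(self):
        return f"Playlist({self.name!r}, {len(self.songs)} songs)"

    def __len__(self):
        return len(self.songs)

    def __eq__(self, other):
        return isinstance(other, Playlist) and self.songs == other.songs

p = Playlist("Focus", ["a", "b"])
len(p)                              # 2 — via __len__
p == Playlist("Copy", ["a", "b"])   # True — __eq__ compares contents
```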
Q: What does @property do and why use it?
@property lets you access a method like an attribute, adding a getter (and optionally setter) without changing the calling code.
class Circle:
def __init__(self, radius):
self._radius = radius
@property
def area(self):
return 3.14159 * self._radius ** 2
@property
def radius(self):
return self._radius
@radius.setter
def radius(self, value):
if value < 0:
raise ValueError("Radius must be non-negative")
self._radius = value
c = Circle(5)
c.area # 78.54 — looks like an attribute
c.radius = 10 # Validated via setter
Use @property for computed attributes, validation, and encapsulation without breaking the public API.
Q: What are abstract classes? How do you create them in Python?
An abstract class defines an interface that subclasses must implement; it can't be instantiated itself. Create one by inheriting from abc.ABC and marking required methods with @abstractmethod.
from abc import ABC, abstractmethod
class Shape(ABC):
@abstractmethod
def area(self):
pass
@abstractmethod
def perimeter(self):
pass
class Rectangle(Shape):
def __init__(self, w, h):
self.w, self.h = w, h
def area(self):
return self.w * self.h
def perimeter(self):
return 2 * (self.w + self.h)
# Shape() # TypeError: can't instantiate
Rectangle(3, 4) # Works — all abstract methods implemented
Want deeper coverage? See Core Concepts for OOP fundamentals.
Error Handling & File I/O
Q: Explain try, except, else, and finally.
try:
result = 10 / x # Code that might fail
except ZeroDivisionError:
print("Can't divide by zero") # Handle specific error
except (TypeError, ValueError) as e:
print(f"Bad input: {e}") # Handle multiple types
else:
print(f"Result: {result}") # Runs only if NO exception
finally:
print("Always runs") # Cleanup — always executes
else runs only if the try block succeeds (no exception). finally always runs — even if there's a return or exception. Use it for cleanup (closing files, releasing locks).
Q: What is a context manager? How does the with statement work?
A context manager handles setup and teardown around a block of code. The with statement calls __enter__ on entry and __exit__ on exit (even if an exception occurs).
# File handling — file auto-closes
with open("data.txt", "r") as f:
content = f.read()
# f is automatically closed here, even if an error occurred
# Custom context manager
from contextlib import contextmanager
@contextmanager
def timer(label):
import time
start = time.perf_counter()
yield # Code inside 'with' runs here
elapsed = time.perf_counter() - start
print(f"{label}: {elapsed:.4f}s")
with timer("Processing"):
# your code here
pass
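The same protocol can also be implemented directly as a class with explicit __enter__/__exit__ — a sketch (Suppress is an illustrative name; the stdlib's contextlib.suppress does this for real):

```python
class Suppress:
    """Minimal context manager: swallow a given exception type."""
    def __init__(self, exc_type):
        self.exc_type = exc_type

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Returning True tells Python the exception was handled
        return exc_type is not None and issubclass(exc_type, self.exc_type)

with Suppress(ZeroDivisionError):
    1 / 0  # swallowed — execution continues after the block
print("still running")
```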
Q: Why should you use with open() instead of f = open()?
The with statement guarantees the file is closed when the block exits, even if an exception occurs. Without with, if an exception happens between open() and f.close(), the file stays open (resource leak). You'd need a try/finally to match what with does automatically.
Q: How do you create a custom exception?
class InsufficientFundsError(Exception):
"""Raised when a withdrawal exceeds the balance."""
def __init__(self, balance, amount):
self.balance = balance
self.amount = amount
super().__init__(
f"Cannot withdraw ${amount}. Balance: ${balance}"
)
class BankAccount:
def __init__(self, balance):
self.balance = balance
def withdraw(self, amount):
if amount > self.balance:
raise InsufficientFundsError(self.balance, amount)
self.balance -= amount
try:
account = BankAccount(100)
account.withdraw(150)
except InsufficientFundsError as e:
print(e) # Cannot withdraw $150. Balance: $100
Coding Challenges
Q: Reverse a string without using slicing or reversed().
# Simple iterative approach — insert(0, ...) shifts elements, so O(n²) overall
def reverse_string(s):
result = []
for char in s:
result.insert(0, char)
return "".join(result)
# More efficient: in-place two-pointer swaps, O(n) time
def reverse_string(s):
chars = list(s)
left, right = 0, len(chars) - 1
while left < right:
chars[left], chars[right] = chars[right], chars[left]
left += 1
right -= 1
return "".join(chars)
# Using reduce
from functools import reduce
def reverse_string(s):
return reduce(lambda acc, c: c + acc, s, "")
# The Pythonic way (for reference):
# s[::-1]
Q: Flatten a nested list of arbitrary depth.
# Recursive solution
def flatten(lst):
result = []
for item in lst:
if isinstance(item, list):
result.extend(flatten(item))
else:
result.append(item)
return result
flatten([1, [2, [3, [4]]], 5]) # [1, 2, 3, 4, 5]
# Generator version (memory efficient)
def flatten_gen(lst):
for item in lst:
if isinstance(item, list):
yield from flatten_gen(item)
else:
yield item
list(flatten_gen([1, [2, [3]], 4])) # [1, 2, 3, 4]
Q: Find all duplicate values in a list.
# Using Counter
from collections import Counter
def find_duplicates(lst):
counts = Counter(lst)
return [item for item, count in counts.items() if count > 1]
find_duplicates([1, 2, 3, 2, 4, 3, 5]) # [2, 3]
# Using set (O(n) time, O(n) space)
def find_duplicates(lst):
seen = set()
dupes = set()
for item in lst:
if item in seen:
dupes.add(item)
seen.add(item)
return list(dupes)
Q: Implement a cache (memoize) decorator from scratch.
import functools
def memoize(func):
"""Cache results of function calls."""
cache = {}
@functools.wraps(func)
def wrapper(*args, **kwargs):
# Create a hashable key from args and kwargs
key = (args, tuple(sorted(kwargs.items())))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
wrapper.cache = cache # Expose cache for inspection
wrapper.cache_clear = lambda: cache.clear()
return wrapper
@memoize
def fibonacci(n):
if n <= 1:
return n
return fibonacci(n - 1) + fibonacci(n - 2)
fibonacci(100) # Instant
len(fibonacci.cache) # 101 cached values
# In production, use functools.lru_cache instead
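The stdlib equivalent mentioned above, for comparison:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache; use a limit like 128 to cap memory
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(100)          # instant — each subproblem computed once
fib.cache_info()  # hits/misses/currsize statistics for free
```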
Q: Merge two sorted lists into one sorted list.
# Two-pointer approach — O(n+m) time, O(n+m) space
def merge_sorted(a, b):
result = []
i = j = 0
while i < len(a) and j < len(b):
if a[i] <= b[j]:
result.append(a[i])
i += 1
else:
result.append(b[j])
j += 1
# Append remaining elements
result.extend(a[i:])
result.extend(b[j:])
return result
merge_sorted([1, 3, 5], [2, 4, 6])
# [1, 2, 3, 4, 5, 6]
# One-liner (but O(n log n) — not optimal)
# sorted(a + b)
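The stdlib also covers this exact problem: heapq.merge consumes already-sorted inputs lazily in O(n+m) total, which helps when the inputs are large or streamed.

```python
import heapq

# heapq.merge returns an iterator over the merged sorted sequence
merged = list(heapq.merge([1, 3, 5], [2, 4, 6]))
print(merged)  # [1, 2, 3, 4, 5, 6]
```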