Spark 2 Workbook Answers
```python
words = lines.flatMap(lambda line: line.split())

# optional cleaning
cleaned = words.map(lambda w: w.lower().strip('.,!?"\''))
distinct_words = cleaned.distinct()
count = distinct_words.count()
```
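Before running this chain on a cluster, it can help to sanity-check the cleaning-and-dedup logic with plain Python built-ins on a tiny input (the sample lines below are made up for illustration):

```python
lines = ["The cat sat.", "the cat ran!"]

words = [w for line in lines for w in line.split()]    # flatMap: split every line
cleaned = [w.lower().strip('.,!?"\'') for w in words]  # map: lowercase, strip punctuation
distinct_words = set(cleaned)                          # distinct
count = len(distinct_words)                            # count

print(count)  # 4 unique words: the, cat, sat, ran
```

The same four transformations map one-to-one onto the RDD version above, which makes it easy to verify expected counts on small data first.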
1. Pick a workbook question.
2. Follow the **Context → Code → Commentary** template above.
3. Run the code locally to verify it works.
4. Polish the write-up, add the performance notes, and you'll have a solid, original answer.
| Operation | PySpark | Scala |
|-----------|---------|-------|
| **Read CSV** | `spark.read.option("header","true").csv(path)` | `spark.read.option("header","true").csv(path)` |
| **Write Parquet** | `df.write.parquet("out.parquet")` | `df.write.parquet("out.parquet")` |
| **Cache** | `df.cache()` | `df.cache()` |
| **Repartition** | `df.repartition(10)` | `df.repartition(10)` |
| **Window** | `from pyspark.sql.window import Window` | `import org.apache.spark.sql.expressions.Window` |
| **UDF** | `spark.udf.register("toUpper", lambda s: s.upper(), StringType())` | `udf((s: String) => s.toUpperCase, StringType)` |
| **Streaming read** | `spark.readStream.format("socket")...` | `spark.readStream.format("socket")...` |
| **Stop Spark** | `spark.stop()` | `spark.stop()` |
Follow the **Context → Code → Commentary** template:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "workbook")  # local context; app name is arbitrary
unique_word_count = sc.textFile("input.txt").flatMap(str.split).distinct().count()  # placeholder path
print(f"Unique words: {unique_word_count}")
```
