

Apache Spark Interview Questions and Answers


Here is a list of the most frequently asked Spark interview questions and answers from technical interviews. These Apache Spark questions and answers suit both freshers and experienced professionals at any level. The questions target intermediate to somewhat advanced Apache Spark professionals, but even if you are a beginner or fresher you should be able to follow the answers and explanations given here. These Apache Spark interview questions and answers will help you prepare for and clear your interviews.

Best Apache Spark Interview Questions and Answers

Besant Technologies supports students by providing Spark interview questions and answers for job placement and preparation. We also provide Apache Spark online training for students around the world through the Gangboard medium. These top interview questions and answers were prepared by our institute's experienced trainers. Stay tuned; we will update new Apache Spark interview questions with answers frequently. If you want to learn Apache Spark hands-on, please go through this Apache Spark Training in Chennai.

Q1) What are the various ways to create contexts in Spark?
  • SparkContext
  • SQLContext
  • SparkSession
  • SQLContext.sparkContext
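
A minimal PySpark sketch of creating these entry points (illustrative only; the app name and local master are assumptions):

from pyspark import SparkContext
from pyspark.sql import SparkSession, SQLContext

# SparkSession is the unified entry point from Spark 2.x onwards
spark = SparkSession.builder.master("local[*]").appName("contexts-demo").getOrCreate()

# The underlying SparkContext, and an SQLContext built on top of it
sc = spark.sparkContext
sqlContext = SQLContext(sc)
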
Q2) Difference between map and flatMap?

a. map – one input row produces exactly one output row.
b. flatMap – one input row can produce zero or more output rows.
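
A small illustrative example of the difference, assuming an existing SparkContext sc:

lines = sc.parallelize(["hello world", "apache spark"])

# map: one output element per input element
lengths = lines.map(lambda line: len(line))          # [11, 12]

# flatMap: zero or more output elements per input element
words = lines.flatMap(lambda line: line.split(" "))  # ['hello', 'world', 'apache', 'spark']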

Q3) Repartition and coalesce difference?

a. Using repartition, Spark can increase or decrease the number of partitions of the data.
b. Using coalesce, Spark can only reduce the number of partitions of the input data.
c. Repartition is less efficient than coalesce because it always performs a full shuffle, whereas coalesce avoids a full shuffle when reducing partitions.
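
A quick sketch, assuming an existing SparkContext sc:

rdd = sc.parallelize(range(100), 10)   # start with 10 partitions

more = rdd.repartition(20)   # full shuffle; can increase or decrease the partition count
fewer = rdd.coalesce(5)      # no full shuffle; can only reduce the partition count
print(more.getNumPartitions(), fewer.getNumPartitions())  # 20 5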

Q4) How do you create a stream in Spark?

a. DStream (via StreamingContext)
b. Structured Streaming
c. DirectStream (e.g., the Kafka direct stream)
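
A hedged Structured Streaming sketch; the socket host and port are placeholder assumptions, not part of the original answer:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Structured Streaming treats a streaming source as an unbounded table
lines = spark.readStream.format("socket").option("host", "localhost").option("port", 9999).load()

# write each micro-batch to the console as it arrives
query = lines.writeStream.format("console").outputMode("append").start()
query.awaitTermination()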

Q5) How do you handle data shuffling in Spark?

Use mapPartitions() and foreachPartition() to process data inside each partition instead of collecting everything to the driver with collect(); this keeps the work distributed and reduces shuffle and driver memory pressure.
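
A sketch of the partition-wise pattern, assuming an existing SparkContext sc; the external sink is left as a comment since it depends on your system:

rdd = sc.parallelize(range(10), 2)

def save_partition(rows):
    # open one connection per partition here, instead of per row or collecting to the driver
    for row in rows:
        pass  # write the row to the external sink

rdd.foreachPartition(save_partition)

# mapPartitions transforms an entire partition at a time
partition_sums = rdd.mapPartitions(lambda it: [sum(it)])
print(partition_sums.collect())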

Q6) What are the file formats supported by Spark?

Avro, Parquet, JSON, XML, CSV, TSV, ORC, and RC are file formats supported by Spark, along with compression codecs such as Snappy.
Spark supports both raw text files and structured file formats for efficient reading.

Q7) What are the internal daemons and components used in Spark?

BlockManager, MemoryStore, DAGScheduler, SparkContext, Driver, Worker, Executor, and Tasks (plus ACLs for access control).

Q8) What is the Spark UI and how do you monitor a Spark job?

Jobs – to view all the Spark jobs
Stages – to check the DAG and the stages of each job
Storage – to check all the cached RDDs
Streaming – to monitor the streaming batches and receivers
Spark History Server – to check the logs of finished Spark jobs

Q9) Which cluster managers are available in Spark?

Standalone
YARN, with yarn-client and yarn-cluster deploy modes (suited to Hadoop's master-slave architecture)
Mesos (a general-purpose cluster manager, also usable for container orchestration)
Kubernetes (container orchestration)

Q10) How do you submit a Spark job?

Use spark-submit, for example (the final argument is a placeholder for your application jar):
spark-submit --class org.apache.spark.examples.ClassJobName --master yarn --deploy-mode client --driver-memory 4g --num-executors 2 --executor-memory 2g --executor-cores 10 /path/to/application.jar
In the above sample:
--master specifies the cluster manager
--driver-memory is the amount of memory allocated to the driver
--executor-memory is the amount of memory allocated to each executor
--num-executors is the total number of executors launched across the worker nodes
--executor-cores is the number of CPU cores each executor can use, i.e. how many tasks it can run concurrently

Q11) What is the difference between DataFrame and Dataset?

A DataFrame is untyped (a schema mismatch only throws an exception at runtime).
A Dataset is typed (a schema mismatch is caught at compile time).

Q12) What are the memory tuning parameters and how do you achieve parallelism in Spark?

a. Leverage the Tungsten execution engine.
b. Analyse the Spark job execution plan.
c. Use caching, broadcasting, and accumulators, along with the other optimization techniques in Spark.

Q13) How do you start the Spark history server?

./sbin/start-history-server.sh --properties-file history.properties
Once this server is started successfully, you can check the logs of all the containers of finished Spark jobs.

Q14) How do you join two DataFrames in Spark?

df1.join(df2, df1.col1 == df2.col1)
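
A hedged example; df1, df2, and the column name col1 are assumed to already exist:

# inner equi-join on col1; other join types include "left", "right", "full", "left_semi", "left_anti"
joined = df1.join(df2, df1.col1 == df2.col1, "inner")
joined.show()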

Q15) What are UDFs and how are they used?

UDFs are user-defined functions. They are used to apply a custom transformation to every row of a specific column, for example converting a timestamp to a day or converting a timestamp to a week.
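
A sketch of registering and applying a UDF; the column name "event_time" and the timestamp format are hypothetical:

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

# hypothetical UDF that extracts the day part from a timestamp string like "2019-01-31 10:00:00"
to_day = udf(lambda ts: ts.split(" ")[0], StringType())

df = df.withColumn("day", to_day(col("event_time")))  # df and "event_time" are assumed to exist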

Q16) Code sample to read data from a text file?

from pyspark import SparkContext
from pyspark.sql import SQLContext
sc = SparkContext("local", "besant")
sqlContext = SQLContext(sc)
rdd = sc.textFile(filename)

Q17) Code sample to read data from MySQL?

spark.read.format("jdbc").options(driver="com.mysql.jdbc.Driver", url="jdbc:mysql://<host>:3306/<db>?user=<usr>&password=<pass>", dbtable="besant", numPartitions=4).load()

Q18) What is Spark?

Spark is a parallel data processing framework. It allows you to develop fast, unified big data applications that combine batch, streaming, and interactive analytics.

Q19) Why Spark?

Spark is a third-generation distributed data processing platform. It is a unified big data solution for all big data processing problems such as batch, interactive, and streaming processing, so it eases many big data problems.

Q20) What is an RDD?

Spark's basic core abstraction is called a Resilient Distributed Dataset (RDD). An RDD is a collection of partitioned data that satisfies these properties: immutable, distributed, lazily evaluated, and cacheable.

Q21) What is Immutable?

Once created and assigned a value, it is not possible to change it; this property is called immutability. Spark RDDs are immutable by default; they do not allow updates or modifications. Note that the variable holding a collection can be reassigned, but the RDD data itself cannot be changed.

Q22) What is Distributed?

An RDD's data is automatically distributed across different parallel computing nodes.

Q23) What is Lazily evaluated?

If you execute a set of operations, they are not necessarily evaluated immediately. Transformations in particular are lazy; evaluation is only triggered when an action is called.

Q24) What is Cacheable?

Spark keeps the data in memory for computation rather than going to disk, so it can access the data many times faster than Hadoop.

Q25) What is the Spark engine's responsibility?

The Spark engine is responsible for scheduling, distributing, and monitoring the application across the cluster.

Q26) What are the common Spark ecosystem components?
  • Spark SQL (Shark) for SQL developers,
  • Spark Streaming for streaming data,
  • MLlib for machine learning algorithms,
  • GraphX for graph computation,
  • SparkR to run R on the Spark engine,
  • BlinkDB, enabling interactive queries over massive data, is another common Spark ecosystem component. GraphX, SparkR, and BlinkDB are in the incubation stage.
Q27) What are Partitions?

A partition is a logical division of the data; the idea is derived from the split in MapReduce. The data is divided logically specifically so it can be processed in parallel. Small chunks of data also improve scalability and speed up processing. Input data, intermediate data, and output data are all partitioned RDDs.

Q28) How does Spark partition the data?

Spark uses the MapReduce API to partition the data. In the input format we can set the number of partitions. By default, the HDFS block size is the partition size (for best performance), but it is possible to change the partition size, much like a split.

Q29) How does Spark store the data?

Spark is a processing engine; it has no storage engine of its own. It can retrieve data from any storage engine such as HDFS, S3, and other data sources.

Q30) Is it mandatory to start Hadoop to run a Spark application?

No, it is not mandatory, but there is no separate storage in Spark, so it uses the local file system to store data. You can load data from the local file system and process it; Hadoop or HDFS is not required to run a Spark application.

Q31) What is SparkContext?

When a programmer creates RDDs, the SparkContext connects to the Spark cluster to create a new SparkContext object. The SparkContext tells Spark how to access the cluster. SparkConf is the key factor in creating the application.

Q32) What are the SparkCore functionalities?

SparkCore is the base engine of the Apache Spark framework. Memory management, fault tolerance, scheduling and monitoring jobs, and interacting with storage systems are the primary functionalities of Spark.

Q33) How is SparkSQL different from HQL and SQL?

SparkSQL is a special component on the SparkCore engine that supports SQL and Hive Query Language without changing any syntax. It is possible to join a SQL table and an HQL table.

Q34) When do we use Spark Streaming?

Spark Streaming is an API for real-time processing of streaming data. Spark Streaming gathers streaming data from different sources such as web server log files, social media data, stock market data, or Hadoop ecosystem tools like Flume and Kafka.

Q35) How does the Spark Streaming API work?

The programmer sets a specific batch interval in the configuration; whatever data arrives in Spark within that interval is grouped into a batch. The input data stream (DStream) goes into Spark Streaming. The framework breaks the stream into small pieces called batches, then feeds them into the Spark engine for processing. The Spark Streaming API passes those batches to the core engine, which produces the final results as batches as well. The output is therefore also a series of batches. This allows streaming data and batch data to be processed in the same way.

Q36) What is Spark MLlib?

Just as Mahout is a machine learning library for Hadoop, MLlib is Spark's machine learning library. MLlib provides various algorithms that scale out on the cluster for data processing. Most data scientists use this MLlib library.

Q37) What is GraphX?

GraphX is a Spark API for manipulating graphs and collections. It unifies ETL, exploratory analysis, and iterative graph computation. It is among the fastest graph systems and provides fault tolerance and ease of use without requiring special skills.

Q38) What is the File System API?

The FS API can read data from different storage devices such as HDFS, S3, or the local file system. Spark uses the FS API to read data from different storage engines.

Q39) Why are partitions immutable?

Every transformation generates a new partition. Partitions use the HDFS API, so a partition is immutable, distributed, and fault tolerant. Partitions are also aware of data locality.

Q40) What is a Transformation in Spark?

Spark provides two distinct kinds of operations on RDDs, called transformations and actions. A transformation follows lazy evaluation and temporarily holds the data until an action is called. Each transformation creates/returns a new RDD. Examples of transformations: map, flatMap, groupByKey, reduceByKey, filter, cogroup, join, sortByKey, union, distinct, and sample are common Spark transformations.

Q41) What is an Action in Spark?

Actions are RDD operations whose results are returned back to the Spark driver program, which kicks off a job to execute on a cluster. A transformation's output is the input of an action. reduce, collect, takeSample, take, first, saveAsTextFile, saveAsSequenceFile, countByKey, and foreach are common actions in Apache Spark.
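
A small sketch of transformations versus actions, assuming an existing SparkContext sc:

nums = sc.parallelize([1, 2, 3, 4, 5])

evens = nums.filter(lambda x: x % 2 == 0)   # transformation: lazy, returns a new RDD
doubled = evens.map(lambda x: x * 2)        # transformation: still nothing has executed

print(doubled.collect())                    # action: triggers the job, returns [4, 8]
print(nums.reduce(lambda a, b: a + b))      # action: returns 15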

Q42) What is RDD Lineage?

Lineage is an RDD technique for rebuilding lost partitions. Spark does not replicate the data in memory; if data is lost, an RDD uses its lineage to rebuild the lost data. Each RDD remembers how it was built from other datasets.

Q43) What are map and flatMap in Spark?

map processes a single row or element at a time and produces one output element for it. In flatMap, each input item can be mapped to zero or more output items (so the function should return a Seq rather than a single item); it is most commonly used to return the elements of an array, for example when splitting lines into words.

Q44) What are broadcast variables?

Broadcast variables let the programmer keep a read-only variable cached on each machine rather than shipping a copy of it with tasks. Spark supports two types of shared variables: broadcast variables (like Hadoop's distributed cache) and accumulators (like Hadoop counters). Broadcast variables are stored as array buffers, which send read-only values to the worker nodes.

Q45) What are Accumulators in Spark?

Accumulators are sometimes described as off-line debuggers. Spark accumulators are similar to Hadoop counters: you can use them to count the number of events and to see what is happening during a job. Only the driver program can read an accumulator's value, not the tasks.

Q46) How does an RDD persist the data?

There are two methods to persist the data: persist(), which persists the data with a chosen storage level, and cache(), which persists it in memory only. Different storage level options exist, such as MEMORY_ONLY, MEMORY_AND_DISK, DISK_ONLY, and more. Both persist() and cache() use different options depending on the task.

Q47) When do you use Apache Spark? Or, what are the benefits of Spark over MapReduce?
  • Spark is extremely fast. According to its claims, it runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It uses RAM effectively to produce results quickly.
  • In the MapReduce approach, you create many Map-Reduce tasks and then tie these tasks together using Oozie or shell scripts. This mechanism is very time consuming, and a MapReduce job has significant latency.
  • Quite often, translating the output of one MR job into the input of another MR job requires writing additional code, because Oozie may not be enough.
  • In Spark, you can basically do everything from a single application or console (the pyspark or Scala console) and get results immediately. Switching between "running something on the cluster" and "doing something locally" is fairly simple and straightforward. This also means less context switching for the developer and more productivity.
  • Spark is roughly equivalent to MapReduce and Oozie put together.
Q48) Is there a point in learning MapReduce, then?

Certainly, for the following reasons:

  • MapReduce is a paradigm used by many big data tools, including Spark. So, understanding the MapReduce paradigm and how to convert a problem into a series of MR tasks is important.
  • When the data grows beyond what can fit into the memory on your cluster, the Hadoop Map-Reduce paradigm is still very relevant.
  • Almost every other tool, such as Hive or Pig, converts its queries into MapReduce phases. If you understand MapReduce, you will be able to optimize your queries better.
Q49) When running Spark on YARN, do I have to install Spark on every node of the YARN cluster?

No. Since Spark runs on top of YARN, it uses YARN itself to execute its commands across the cluster's nodes.

Q50) What are a DAG and a Stage in Spark processing?

For any Spark program, the overall execution plan is built by the DAG scheduler as a directed acyclic graph (DAG) of stages.

Execution is then optimized stage by stage: operations that can be pipelined are grouped into a single stage, and stage boundaries occur at shuffle operations.

Q51) Code sample to read data from MySQL?

spark.read.format("jdbc").options(driver="com.mysql.jdbc.Driver", url="jdbc:mysql://<host>:3306/<db>?user=<usr>&password=<pass>", dbtable="besant", numPartitions=4).load()

Q52) What is persist()?

persist() stores an RDD (or DataFrame) for reuse across actions and lets you specify the storage level you want, such as MEMORY_ONLY, MEMORY_AND_DISK, or DISK_ONLY. Calling cache() is equivalent to calling persist() with the default MEMORY_ONLY storage level.

Q53) Describe the typical workflow of a Spark program?

Every Spark program and shell session works as follows (a sketch of these steps follows this list):

  • Create some input RDDs from external data.
  • Transform them to define new RDDs using transformations like filter().
  • Ask Spark to persist() any intermediate RDDs that will need to be reused.
  • Launch actions such as count() and first() to kick off a parallel computation, which is then optimized and executed by Spark.
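
A minimal sketch of that workflow, assuming an existing SparkContext sc; the HDFS path is a placeholder:

raw = sc.textFile("hdfs:///logs/app.log")            # 1. create an RDD from external data
errors = raw.filter(lambda line: "ERROR" in line)    # 2. transform it into new RDDs
errors.persist()                                     # 3. persist an intermediate RDD that is reused
print(errors.count())                                # 4. actions trigger the actual computation
print(errors.first())
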
Q54) Difference between cache() and persist()?

With cache(), you use only the default storage level, MEMORY_ONLY. With persist(), you can specify which storage level you want, so cache() is equivalent to calling persist() with the default storage level. Spark has many levels of persistence to choose from based on what our goals are. The default persist() stores the data in the JVM heap as unserialized objects. When we write data out to disk, that data is always serialized. The different levels of persistence are MEMORY_ONLY, MEMORY_ONLY_SER, MEMORY_AND_DISK, MEMORY_AND_DISK_SER, and DISK_ONLY.

Q55) What is a lineage graph?

As you derive new RDDs from one another using transformations, Spark keeps track of the set of dependencies between the different RDDs, called the lineage graph. It uses this information to compute each RDD on demand and to recover lost data if part of a persisted RDD is lost.

Q56) Difference between map() and flatMap()?

The map() transformation takes in a function and applies it to each element in the RDD, with the result of the function becoming the new value of that element in the resulting RDD. Sometimes we want to produce multiple output elements for each input element; the operation that does this is called flatMap(). As with map(), the function we provide to flatMap() is called individually for each element in our input RDD, but instead of returning a single element, it returns an iterator of return values.

Q57) What is the reduce() action?

It takes a function that operates on two elements of the type in your RDD and returns a new element of the same type. A simple example of such a function is +, which we can use to sum our RDD. With reduce(), we can easily sum the elements of our RDD, count the number of elements, and perform other kinds of aggregations.
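
A tiny example, assuming an existing SparkContext sc:

nums = sc.parallelize([1, 2, 3, 4])
total = nums.reduce(lambda a, b: a + b)                     # 10
count = nums.map(lambda x: 1).reduce(lambda a, b: a + b)    # 4, a count built from reduce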

Q58) What is a Pair RDD?

Spark provides special operations on RDDs containing key/value pairs; these RDDs are called pair RDDs. Pair RDDs are a useful building block in many programs, as they expose operations that let you act on each key in parallel. For example, pair RDDs have a reduceByKey() method that can aggregate data separately for each key, and a join() method that can merge two RDDs by grouping elements with the same key.
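
A short pair-RDD sketch, assuming an existing SparkContext sc:

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
totals = pairs.reduceByKey(lambda a, b: a + b)    # sums per key: ("a", 4) and ("b", 2)

other = sc.parallelize([("a", "x"), ("b", "y")])
joined = totals.join(other)                       # groups by key: ("a", (4, "x")), ("b", (2, "y"))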

Q59) What are Accumulators?

Accumulators provide a simple syntax for aggregating values from worker nodes back to the driver program. One of the most common uses of accumulators is to count events that occur during job execution for debugging purposes.
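
A minimal accumulator example, assuming an existing SparkContext sc:

accum = sc.accumulator(0)
sc.parallelize([1, 2, 3, 4]).foreach(lambda x: accum.add(x))  # workers add to the accumulator
print(accum.value)   # 10; only the driver program can read the accumulator's value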

Q60) What are Broadcast Variables?

Spark's second type of shared variable, broadcast variables, allows the program to efficiently send a large, read-only value to all the worker nodes for use in one or more Spark operations. They come in handy, for instance, when your application needs to send a large, read-only lookup table to all of the nodes.
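
A small broadcast sketch, assuming an existing SparkContext sc; the lookup table is a made-up example:

lookup = sc.broadcast({"IN": "India", "US": "United States"})  # shipped once to each worker

codes = sc.parallelize(["IN", "US", "IN"])
names = codes.map(lambda c: lookup.value.get(c, "unknown"))
print(names.collect())   # ['India', 'United States', 'India']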

Q61) What is Piping?

Spark provides a pipe() method on RDDs. pipe() lets us write parts of jobs in any language we want, as long as it can read from and write to Unix standard streams. With pipe(), you can write a transformation of an RDD that reads each RDD element from standard input as a string, manipulates that string however you like, and then writes the result(s) as strings to standard output.

Q62) What are the advantages of Spark over MapReduce?
  • Due to in-memory processing, Spark executes workloads around 10-100x faster than Hadoop MapReduce, which relies on persistent storage for all of its data processing tasks.
  • Unlike Hadoop, Spark provides built-in libraries to perform multiple kinds of tasks from the same core, such as batch processing, streaming, machine learning, and interactive SQL queries. Hadoop, in contrast, only supports batch processing.
  • Hadoop is highly disk dependent, while Spark promotes caching and in-memory data storage.
  • Spark is capable of performing computations multiple times on the same dataset. This is called iterative computation, whereas Hadoop has no built-in support for iterative computing.
Q63) Is there any advantage of learning MapReduce, then?

Yes. MapReduce is a paradigm used by many big data tools, including Spark. It is extremely relevant to use MapReduce when the data grows bigger and bigger. Most tools like Pig and Hive convert their queries into MapReduce phases to optimize them better.

Q64) What is a Spark Executor?

When the SparkContext connects to a cluster manager, it acquires executors on nodes in the cluster. Executors are Spark processes that run computations and store the data on the worker nodes. The final tasks from the SparkContext are transferred to executors for execution.

Q65) Name the types of Cluster Managers in Spark?

The Spark framework supports three major types of cluster managers:

  • Standalone: a basic manager to set up a cluster
  • Apache Mesos: a generalized, commonly used cluster manager that also runs Hadoop MapReduce and other applications
  • Yarn: responsible for resource management in Hadoop
Q66) What are Executors?

Spark executors are worker processes responsible for running the individual tasks in a given Spark job. Executors are launched once at the start of a Spark application and typically run for the entire lifetime of the application. Executors have two roles. First, they run the tasks that make up the application and return results to the driver. Second, they provide in-memory storage for RDDs that are cached by user programs.

Q67) What are the steps that occur when you run a Spark application on a cluster?

The user submits an application using spark-submit.

  • spark-submit launches the driver program and invokes the main() method specified by the user.
  • The driver program contacts the cluster manager to ask for resources to launch executors.
  • The cluster manager launches executors on behalf of the driver program.
  • The driver process runs through the user application. Based on the RDD actions and transformations in the program, the driver sends work to executors in the form of tasks.
  • Tasks are run on executor processes to compute and save results.
  • If the driver's main() method exits or it calls SparkContext.stop(), it terminates the executors and releases resources from the cluster manager.
Q68) What is Spark SQL?

Spark SQL is a module in Apache Spark that integrates relational processing (e.g., declarative queries and optimized storage) with Spark's procedural programming API. Spark SQL makes two main additions. First, it offers much tighter integration between relational and procedural processing, through a declarative DataFrame API. Second, it includes a highly extensible optimizer, Catalyst.

Big data applications require a mix of processing techniques, data sources, and storage formats. The earliest systems designed for these workloads, such as MapReduce, gave users a powerful but low-level procedural programming interface. Programming such systems was onerous and required manual optimization by the user to achieve high performance. As a result, several new systems sought to provide a more productive user experience by offering relational interfaces to big data. Systems like Pig, Hive, and Shark all take advantage of declarative queries to provide richer automatic optimizations.

Q69) What is a SchemaRDD/DataFrame?

A SchemaRDD is an RDD composed of Row objects with additional schema information about the types in each column. Row objects are just wrappers around arrays of basic types (e.g., integers and strings).

Q70) How is Spark different from Hadoop?

They differ in speed, processing, and interactivity. Spark is faster as compared to Hadoop. Spark supports real-time as well as batch processing whereas Hadoop supports batch processing. Spark also provides interactive modes unlike Hadoop with no interactivity provision.

Q71) Explain the use of Apache Spark?

Spark is an open-source framework used for real-time data processing. It has a flourishing, active open-source community.

Q72) What are Spark’s key features?

Main features of Spark include its speed, Polyglot, Multiple Format Provision, Real-Time Operations, Lazy Evaluation, Hadoop Integration, and Machine Learning, etc. These features are explained as follows:

  • Polyglot means the provision of high-level APIs in languages like Java, Python, Scala, and R. It also provides a Scala shell and a Python shell. The Scala shell is accessed using ./bin/spark-shell and the Python shell using ./bin/pyspark.
  • Spark's speed is much greater than Hadoop's. Spark achieves this speed through controlled partitioning: it handles data with partitions that help parallelize distributed data processing with minimal network traffic.
  • Multiple formats such as JSON, Parquet, Hive, Cassandra, etc. are supported. Data sources are more than simple pipes that convert data and pull it into Spark.
  • Evaluation in Spark is called lazy evaluation because it is delayed until necessary. This also contributes to its speed.
  • Real-time operation has low latency because its in-memory operational model is supported on production clusters.
  • Hadoop integration is a great advantage, especially for those who started their careers with Hadoop. Spark can be executed on top of an existing Hadoop cluster.
  • The machine learning component of Spark is MLlib, and it is handy for big data. It eliminates the need for separate tools for ML processing.
Q73) Name all the languages that Spark supports?
  • Scala (Interactive shell)
  • Python (Interactive shell)
  • Java
  • R
Q74) Is Spark better than MapReduce?

There are some features of Spark that make it superior.

  • It is faster than MapReduce because of in-memory processing.
  • Its inbuilt libraries can handle multiple tasks ranging from batch processing to streaming and ML.
  • It also provides caching.
Q75) Explain YARN?

YARN provides a central resource management platform and delivers scalable operations across the cluster. It is a distributed container manager, while Spark is meant for data processing.

Q76) Would you mount Spark on every YARN node?

No, because Spark executes on top of YARN and runs independently of where it is installed. Spark has several options to make use of YARN when dispatching tasks to the cluster. Additionally, there are several configurations for running on YARN, such as master, deploy-mode, driver-memory, etc.

Q77) Would you consider learning MapReduce?

MapReduce is a paradigm that is useful in big data tools, including Spark. It is tremendously relevant to use MapReduce as the data grows larger and larger. Most tools such as Pig and Hive translate their queries into MapReduce stages to optimize them better.

Q78) What is RDD?

A Resilient Distributed Dataset (RDD) is a fault-tolerant collection of operational elements that run in parallel. The partitioned data of an RDD is immutable and distributed in nature. Parallelized collections and Hadoop datasets are two kinds of RDDs. An RDD is essentially a slice of data kept in memory and distributed across nodes. RDDs are lazily evaluated in Spark; this is called lazy evaluation.

Q79) What is the process of RDD creation?

Use one of the following approaches (a sketch of both follows this list):

  • Parallelize a collection in the driver program, OR
  • Load an external dataset from storage such as a shared file system.
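
A minimal sketch of both approaches, assuming an existing SparkContext sc; the HDFS path is a placeholder:

# 1. parallelize a collection that already exists in the driver program
rdd1 = sc.parallelize([1, 2, 3, 4, 5])

# 2. load an external dataset from shared storage
rdd2 = sc.textFile("hdfs:///data/input.txt")
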
Q80) Explain Executor Memory?

A Spark application comes with a fixed heap size and a fixed number of cores for each Spark executor. The heap size is the Spark executor memory, controlled with the property named spark.executor.memory. Every Spark application has its own executors on the worker nodes. The executor memory is fundamentally a measure of how much memory of the worker node the application will utilize.

Q81) What are Partitions in Spark?

A partition is a smaller division of data. It represents a logical portion of a big distributed data set; partitioning is the procedure for deriving logical units of data. Spark handles data with these partitions, which aid in parallelizing distributed processing with negligible network traffic. Spark attempts to read data from nearby nodes into an RDD. Since Spark usually accesses distributed partitioned data, it creates partitions to hold the data chunks and so optimize transformation operations. Everything in Spark is a partitioned RDD.

Q82) Explain the operations supported by RDD?

The following operations are supported:

  • Transformations: create a new RDD. Transformations are executed on demand and are generally computed lazily.
  • Actions: actions return the final outcomes of RDD computations. They trigger execution using the lineage graph to load the data into the original RDD, carry out all intermediate transformations, and then return the final outcome to the driver program.
Q83) Explain Spark Transformations?

Transformations are applied to an RDD and produce another RDD. Some examples are map() and filter(). filter() creates a new RDD by selecting the elements of the current RDD that pass the function argument. Transformations are evaluated lazily.

Q84) What are the Spark Actions?

An action is used to bring the data back from an RDD to the local machine. The previously applied transformations are what the action actually runs. Actions trigger execution via the lineage graph, cause the data to be loaded into the original RDD, carry out all intermediate transformations, and then return the final outcome to the driver program.

reduce() is an action that applies the passed function repeatedly to aggregate values. The take() action is used to bring values from the RDD to the local node.

Q85) What functions are there in SparkCore?

SparkCore is the distributed execution engine. It performs several important functions such as memory management, job monitoring, fault tolerance, scheduling, and interaction with storage systems.

Q86) What is a Pair RDD?

Special operations can be performed on RDDs using key/value pairs; such RDDs are called pair RDDs. They give users per-key access to the data. With the reduceByKey() method, data is aggregated based on the key; the join() method combines different RDDs based on the elements having the same key.

Q87) What are the Spark Ecosystem components?

Following are the main component of this system:

  • Spark Core: the base engine used for large-scale parallel and distributed data processing
  • Spark Streaming: used to process real-time streaming data
  • Spark SQL: It integrates programming API of Spark with relational processing
  • GraphX: For Graph-parallel design
  • MLlib: Achieves machine learning with Apache Spark
Q88) Explain Spark streaming?

This feature of Spark enables high-throughput, fault-tolerant processing of live data. The central stream element is the DStream, which is fundamentally a series of RDDs. Data coming from diverse sources like HDFS and Flume is streamed and finally written out to file systems, live dashboards, and databases.

Q89) Which API is used to implement Spark graphs?

GraphX is used to implement graphs and graph-parallel computation. It extends the Spark RDD with a Resilient Distributed Property Graph, which is a directed multigraph with multiple edges. The parallel edges permit multiple relationships between the same vertices. For graph computation, fundamental operators are exposed by GraphX. In addition, it includes a growing collection of graph algorithms and builders to streamline graph analytics tasks.

Q90) Explain PageRank?

It is used to measure the importance of each vertex of a graph, assuming an edge from a to b signifies an endorsement of b's importance by a. For instance, the Twitter user followed by the maximum number of users will be ranked higher. Static PageRank runs for a fixed number of iterations, whereas dynamic PageRank runs until the ranks converge.

Q91) Explain machine learning implementation in Spark?

Spark uses an ML library named MLlib. Its objective is to make machine learning easy and scalable with common algorithms and use cases such as clustering, dimensionality reduction, etc.

Q92) How is SQL implemented in Spark?

Spark SQL is used to integrate the functional programming API with relational processing. It supports querying data using SQL and the Hive Query Language, etc. Additionally, it provides support for many data sources and makes it possible to weave SQL queries with code transformations, resulting in a very powerful tool.

Spark SQL libraries:

Data Source and DataFrame API, Optimizer and Interpreter and SQL Service

Q93) Explain the term Parquet file?

Parquet is a columnar-format file supported by numerous data processing systems. It is used to perform both read and write operations in Spark SQL and is one of the most desirable formats for data analytics. Its columnar approach comes with the following advantages:

  • Columnar storage limits IO operations.
  • It can fetch only the specific columns you need.
  • Columnar storage consumes less space.
  • It gives better-summarized data and type-specific encoding.
Q94) Can you use Apache Spark alongside Hadoop?

Yes; together they make a powerful combination. Spark can run on top of Hadoop's HDFS, and Hadoop's MapReduce can be used alongside Spark. Many Spark applications are run on YARN. Used together, MapReduce and Spark provide batch and real-time processing respectively.

Q95) Explain RDD Lineage?

There is no concept of data replication in Spark. RDD lineage is used to rebuild any lost data: it reconstructs the lost partitions of an RDD.

Q96) Explain the term Spark Driver?

It is the program running on the master node that declares transformations, actions, etc. on data RDDs. The driver in Spark creates the SparkContext, which is connected to a Spark master. The driver also delivers the RDD graphs to the master, where the standalone cluster manager runs.

Q97) Name file systems maintained by Spark?
  • Amazon S3
  • Hadoop Distributed File System (HDFS).
  • Local File system.
Q98) What are some functions related to Spark SQL?
  • Loading data from a variety of structured sources.
  • Querying data with SQL statements.
  • Providing rich integration between SQL and regular Python/Scala/Java code.
Q99) Explain Spark Executor?

The SparkContext connects to a cluster manager and obtains executors on the cluster nodes. Executors are Spark processes that run calculations and store the data on the worker node. The final SparkContext tasks are transferred to the executors and executed.

Q100) What are some Spark Cluster Managers?

These are the following:

  • Standalone: used for a cluster set up.
  • Apache Mesos: runs Hadoop applications etc.
  • YARN performs resource management of Hadoop.
Q101) Explain the worker node?

Worker nodes run the application code inside the cluster. The driver program must listen for and accept incoming connections from its executors, so it must be network addressable from the worker nodes. A worker node can be a slave node: the master node assigns the work, and worker nodes process the data stored on the node. A worker node reports its resources to the master node, and based on resource availability the master schedules tasks.

Q102) What are the demerits of Spark?
  • Spark uses more storage compared to Hadoop, etc.
  • Developers should be cautious while running applications in Spark.
  • The work needs to be distributed over multiple clusters.
  • The "in-memory" capability can become a bottleneck for cost-efficient processing.
  • Spark consumes a huge amount of memory.
Q103) Name use cases best utilized in Spark than Hadoop processing?

These are the following:

  • Sensor Data Processing: "in-memory" computing works best here because data is retrieved and combined from diverse sources.
  • Real-Time Processing: Real-time querying of Spark is famous in banking, healthcare, transportation, etc.
  • Stream Processing: Live streams alerts are best processed in Spark.
  • Big Data Processing: Speed of Spark is faster than Hadoop.
Q104) Explain the Sparse Vector?

It comes with two parallel arrays:

  • For indices
  • For values.

These vectors store only the non-zero entries, to save space.
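
A tiny sketch using the pyspark.ml linear algebra helpers:

from pyspark.ml.linalg import Vectors

# a 4-element vector with non-zero entries at indices 1 and 3
sv = Vectors.sparse(4, [1, 3], [3.0, 4.0])
print(sv.toArray())   # dense equivalent: [0.0, 3.0, 0.0, 4.0]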

Q105) What is the use of Spark in Cassandra databases?

Once you connect Spark to a Cassandra cluster and add the Spark Cassandra Connector to your Spark project, a Spark executor talks to the local Cassandra node and queries only local data. It makes queries faster by reducing the use of the network for sending data between the Spark executors and the Cassandra nodes.

Q106) Can you use Apache Mesos to run Spark?

Yes, it is possible. When Mesos is used, the Mesos master replaces the Spark master as the cluster manager. Mesos determines which machine handles which task; it accounts for the various frameworks and schedules its many short-lived tasks accordingly.

Q107) Explain the process of connecting Spark to Mesos?
  • Spark driver program is configured and connected to Mesos.
  • Spark binary package is kept so that it is accessible by Mesos.
  • In a similar location, install Apache Spark
  • Configure ‘spark.mesos.executor.home’
  • It will point to the location of its installation.
Q108) What is data transfer minimization in Spark?

This practice helps write Spark programs that run quickly and reliably. There are several ways to achieve it:

  • Broadcast variables – increase the efficiency of joins between small and large RDDs.
  • Accumulators – help update variable values in parallel during execution.
Q109) Explain broadcast variables?

They allow the programmer to keep a read-only variable cached on every machine. They are used to give every node a copy of a large input dataset in an efficient manner. Spark distributes broadcast variables using efficient broadcast algorithms to reduce communication cost.

Q110) What is Spark accumulator?

These are variables that are only "added" to through an associative and commutative operation, used to implement counters. Tracking them in the UI is useful for understanding the progress of running stages. Spark natively supports numeric accumulators.

Q111) What is the need for broadcast variables?

Broadcast variables are read-only variables cached in memory on every machine. Their use removes the need to ship copies of a variable with every single task, so data is processed faster. Broadcast variables help in storing a lookup table in memory, which improves retrieval efficiency compared to an RDD lookup().

Q112) What is the triggering of automatic clean-ups?

You can trigger them by setting the parameter 'spark.cleaner.ttl', or by dividing long-running jobs into different batches and writing the intermediate results to disk.

Q113) Explain the Sliding Window process?

A sliding window is used to control the transmission of data packets between multiple computer networks. The Spark Streaming library offers windowed computations, where transformations on RDDs are applied over a sliding window of data. Every time the window slides, the RDDs that fall within the particular window are combined and operated upon to produce new RDDs of the windowed DStream.

Q114) Explain DStream?

Discretized Stream (DStream) is the abstraction offered by Spark Streaming. It is a continuous stream of data, received either from a data source or from a processed data stream generated by transforming the input stream. Internally, a DStream is represented by a continuous series of RDDs, where each RDD contains data from a certain interval. Any operation applied on a DStream translates to operations on the underlying RDDs.

Q115) Name Spark SQL sources?

Parquet file, Hive tables, JSON datasets are some of these.

Q116) Name Spark's levels of persistence?

Spark automatically persists the intermediate data from various shuffle operations; nevertheless, it is often recommended that users call persist() on an RDD that will be reused. There are many persistence levels in Spark (a short usage sketch follows the list):

  • MEMORY_ONLY: stores the RDD as deserialized Java objects. If the RDD does not fit in memory, some partitions are not cached and are recomputed as needed. MEMORY_ONLY is the default level.
  • MEMORY_AND_DISK: if the RDD does not fit in memory, the partitions that do not fit are stored on disk.
  • MEMORY_ONLY_SER: stores the RDD as serialized Java objects.
  • MEMORY_AND_DISK_SER: same as MEMORY_ONLY_SER, however it spills partitions that don't fit in memory to disk instead of recomputing them.
  • DISK_ONLY: stores the RDD partitions on disk only.
  • OFF_HEAP: same as MEMORY_ONLY_SER, however it keeps the data in off-heap memory.
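
A minimal usage sketch, assuming an existing SparkContext sc:

from pyspark import StorageLevel

rdd = sc.parallelize(range(1000))
rdd.persist(StorageLevel.MEMORY_AND_DISK)   # partitions that don't fit in memory spill to disk
rdd.count()                                 # the first action materializes and caches the data
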
Q117) Are there any checkpoints in Spark?

Yes. Lineage graphs are always useful for recovering RDDs from a failure, but this is generally time consuming if the RDDs have long lineage chains. Spark provides a checkpointing API, along with a REPLICATE flag for persistence. The decision about which data to checkpoint is left to the user. Checkpoints are valuable when the lineage graphs are long and have wide dependencies.

Q118) Would you install Spark on every YARN cluster node when using Spark applications?

It is not necessary to do that, because Spark executes on top of YARN without requiring any change to the cluster.
