I'm trying to make presentation slides from a Jupyter Notebook, but there is no button to begin presentation mode. So I'd like to know if there is any shortcut to start the presentation, or any way to make that button appear. BTW, I use Python 2.7 and have already installed RISE. Thanks.
My issue was that I tried to enable the "Enter RISE" button. So, to fix my issue, I used conda install -c damianavila82 rise instead of pip install RISE (I normally use pip to install new Python libraries, anyway).
However, if you are looking for a shortcut for the presentation mode, you can try "Alt + R" for entering and exiting RISE. I'm using Windows 10, by the way.
You can either install RISE with conda:
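That is, using the channel quoted in the asker's own answer above:

```shell
# Install RISE from the maintainer's conda channel; this also registers
# the notebook extension with Jupyter in one step.
conda install -c damianavila82 rise
```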
as already mentioned in your own answer. This will install and set up Jupyter and RISE so that you can use RISE from Jupyter.
Or you can install RISE with pip, as you originally did, and then make Jupyter aware of it:
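The steps are roughly the following (flags as documented for the classic-Notebook-era RISE; exact options may differ between RISE versions):

```shell
pip install RISE
jupyter-nbextension install rise --py --sys-prefix  # copy the JS/CSS files to Jupyter's directories
jupyter-nbextension enable rise --py --sys-prefix   # tell Jupyter to actually load them
```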
Apparently, the jupyter-nbextension install step copies the required JavaScript and CSS files from where pip placed them to where Jupyter can find them, while the jupyter-nbextension enable step tells Jupyter to actually load them and enable RISE.
Both ways (and a third one using the RISE source code repository) are documented at https://damianavila.github.io/RISE/installation.html
Quantum computing proposes a revolutionary paradigm that can radically transform numerous scientific and industrial application domains. To realize this promise, software solutions are needed that can effectively harness its power. However, developers face significant challenges when developing quantum software due to the high computational demands of simulating quantum computers on classical systems. In this paper, we investigate the potential of using remote computational capabilities, in an accessible and cost-efficient manner, to improve the experience of quantum software developers.
I. Introduction
Quantum computing holds great promise as a revolutionary technology that can transform various scientific and industrial fields. By harnessing the principles of quantum mechanics, quantum computers can perform complex calculations and solve problems that are currently intractable for classical computers. This promises breakthroughs in areas such as cryptography, optimization, drug discovery, materials science, and machine learning. Although quantum advantage has been declared in experiments where quantum computing hardware has been shown to provide a significant computational advantage over classical alternatives on specific problems [1], we still have to work for the foreseeable future with Noisy Intermediate-Scale Quantum (NISQ) computers. These computers employ a hybrid computational model in which a classical computer controls a noisy quantum device. Even though NISQ devices are not capable of providing the quantum advantage promised by quantum algorithms [2], they are an invaluable platform for research and experimentation.
However, even with the steady advancements in qubit counts [3, 4], current NISQ computers are out of reach for most developers due to their scarcity and high operational costs. Therefore, quantum software developers have to rely on simulators running on classical computers to experiment with quantum software. While it is straightforward to start the development process on commonly used hardware, running larger circuits necessitates specialized graphics processing units (GPUs) found in high-end consumer products (e.g., mobile workstations) or high-performance computing infrastructure (e.g., clusters of GPUs). Thus, a developer requires either deep technical knowledge to configure the software stack needed to exploit advanced GPU capabilities and run circuits of up to 31 qubits [5], or access to a supercomputer for running circuits with 40 qubits [6].
Our approach to improving the quantum software development experience is to execute the quantum software routines on Qubernetes clusters [7], which manage the necessary computational resources. The solution is packaged as an easy-to-use Jupyter kernel, so the developers are not directly exposed to the complexities of operating the cluster where the quantum routines are executed, allowing them to switch between the local and the remote development environments when the number of qubits in their quantum circuits becomes large.
The rest of the paper is organised as follows. Section II presents the background and motivation behind this work. Section III describes the implementation of the solution. Section IV describes the test environments and discusses the performance. Concluding remarks and future work are presented in Section V.
II. Background and Motivation

The software development life cycle (SDLC) of hybrid classical-quantum applications follows a multi-faceted approach [8], as depicted in Figure 1. At the top level, the classical software development process starts by identifying user needs and deriving system requirements from them. These requirements are transformed into a design and implemented. The result is verified against the requirements and validated against user needs. Once the software system enters the operational phase, any detected anomalies are used to identify potential new system requirements, if necessary. A dedicated track for quantum components is followed within the SDLC [9], specific to the implementation of quantum technology. The requirements for these components are converted into a design, which is subsequently implemented on classical computers, verified on simulators or real quantum hardware, and integrated into the larger software system. During the operational phase, the quantum software components are executed on actual quantum hardware. Scheduling ensures efficient utilization of the scarce quantum hardware resources, while monitoring capabilities enable the detection of anomalies throughout the operational stage.
As quantum computers are a scarce resource, it is not practical to develop quantum software components directly on hardware. Instead, developers should use simulators that run on commonly available and less expensive classical resources (e.g., CPUs and GPUs [10]) for the early stages of development and testing. Later on, they can use more sophisticated simulators that are able to model the noise of actual hardware. Only when the components are mature enough can development continue on quantum processing units (QPUs), the actual hardware used during the execution phase. However, as the implementation of the quantum software stack trades off the visibility of the execution process for usability [11], developers have to experiment and iterate on devices and simulators to determine the actual behaviour of their programs. This approach ensures that the use of quantum resources is efficient and effective.
Qiskit is a Python library and quantum development toolkit designed to accommodate different types of quantum computers in the NISQ era. It allows algorithm designers to develop applications that leverage quantum computing, and circuit designers to optimize circuits and explore their properties, such as error correction, verification and validation. Qiskit also offers tools to research and optimize gates, with precise control and the ability to explore noise, apply dynamical decoupling and perform optimized control theory. Qiskit is an open-source project and currently offers dozens of additional libraries, plugins, simulator backends and application packages for multiple domains such as machine learning, physics, chemistry and finance, alongside other related projects. Several transpiler plugins are also available for users to optimize and interact with the transpiling process (https://qiskit.github.io/ecosystem/). Alongside Qiskit, IBM offers OpenQASM and OpenPulse. OpenQASM is an imperative language whose main purpose is to act as an intermediate representation for high-level compilers targeting QC hardware; it offers precise control over gates, measurements and conditionals (https://openqasm.com/intro.html). OpenPulse is a specification for pulse-level control of general-purpose QC; it is designed to be hardware-architecture agnostic and to enable experimentation with a finer degree of control [12]. Qiskit Aer (https://qiskit.github.io/qiskit-aer/index.html) is a Qiskit library providing high-performance QC simulators and noise models. Some simulators included in Aer support leveraging Nvidia GPUs with CUDA version 11.2 or newer. The relations between Qiskit, Qiskit Aer and CUDA in the development and execution environment are presented in Figure 2.
PennyLane (https://pennylane.ai/) is a Python library specialized in machine learning for quantum computing, enabling the use of popular classical machine learning frameworks such as TensorFlow (https://www.tensorflow.org/). PennyLane is designed to support execution on various QC simulators and actual QC hardware, handling the communication with the device and compiling the circuits. The library includes a basic simulator backend and offers GPU support through the PennyLane Lightning plugin (https://docs.pennylane.ai/projects/lightning/) with three different high-performance backends. Lightning GPU uses the NVIDIA cuQuantum SDK to accelerate the simulation of quantum state vectors and supports CUDA-capable Nvidia GPUs.
Nvidia CUDA (https://developer.nvidia.com/cuda-zone) is a computing platform developed for GPUs, targeting computationally demanding tasks suitable for parallel computing with up to thousands of threads. cuQuantum (https://docs.nvidia.com/cuda/cuquantum/) is an SDK based on CUDA that offers two libraries for quantum computing: cuStateVec for state vector computation and cuTensorNet for tensor network computation. cuStateVec is used by gate-based general quantum computer simulators, providing measurement, gate application, expectation values, sampling and state vector movement. The cuStateVec library is available for CUDA versions 11 and 12. Nvidia cuQuantum is used by both PennyLane Lightning and Qiskit Aer for their GPU-powered quantum simulator backends.
Developing across all target execution environments exposes the quantum software developer to a wide range of technologies that force them to balance their primary development activities with deep dives into operational aspects, such as configuring and maintaining their development environments or getting access to compatible hardware accelerators for running the relevant simulators. For example, Fig. 2 provides an overview of the software stack that application or algorithm developers using the Qiskit tools must be aware of. The situation is similar for other mainstream toolkits such as PennyLane or Cirq (https://quantumai.google/qsim/cirq_interface). Experimental programming toolkits such as Eclipse Qrisp [13] leverage existing Cirq or Qiskit assets to execute circuits on GPU-accelerated simulators.
JupyterLab (https://jupyter.org) offers a versatile and user-friendly interactive computing platform suitable for data science, scientific computing, machine learning, and quantum computing. With its flexible architecture and extensive plugin ecosystem, it allows its users to develop customized workflows tailored to their specific needs, such as data exploration, prototyping algorithms or creating interactive presentations.
The key enabler of Jupyter is the notebook, an interactive and collaborative document formed by a collection of cells that can contain code, Markdown-formatted text (https://spec.commonmark.org/current/), equations or interactive widgets. A kernel is a computational engine that executes the code contained within the notebook. Jupyter supports multiple programming languages through different kernels, such as Python, R, Julia, and others. Users can select the desired kernel depending on their preferred programming language for a specific notebook. These combined capabilities allow scientists and algorithm developers to perform their work using a combination of code, explanatory text, and visualizations, making it easier to experiment, iterate, and document the development process.
JupyterHub (https://jupyter.org/hub) extends the functionality of JupyterLab to groups of users, giving them access to computational environments and resources without the burden of installation and maintenance tasks. The project provides two distributions: The Littlest JupyterHub, suitable for small groups of users (typically fewer than 100) and installable on a single virtual machine, and Zero to JupyterHub for Kubernetes (https://z2jh.jupyter.org/en/latest/index.html), suitable for large numbers of users, which makes extensive use of container technologies, cloud resources and infrastructure. The container that runs JupyterLab can be customised following the Jupyter Docker Stacks convention (https://jupyter-docker-stacks.readthedocs.io/en/latest/index.html), allowing the user to run quantum algorithms on GPU-accelerated simulators like Qiskit Aer or PennyLane Lightning. However, as the pod life cycle is linked to the user session, the GPU is locked by the user's pod regardless of whether the Python kernel is executing code, a utilization pattern that is not optimal.
Cloud computing allows the development of scalable applications [14], which rely on computing resources like computing power, storage and databases that are accessed on a pay-per-use basis. Through the extensive use of application programming interfaces (APIs), teams of software developers and operators can scale these resources up and down in response to users' needs. This entails designing applications as small, loosely coupled components that can be bundled with their dependencies into portable containers and deployed on immutable infrastructure. Furthermore, integrated monitoring and logging offer valuable insights into performance, health, and behaviour, empowering a swift response to potential anomalies. Kubernetes is the industry-standard container orchestration platform for automating deployment, scaling, and management of containerized cloud-native applications. Maintained as an open-source project under the Cloud Native Computing Foundation (CNCF, https://www.cncf.io/), together with a myriad of projects offering supporting functionality, it allows users to deploy applications on the managed infrastructure of the major cloud providers (e.g., AWS EKS (https://aws.amazon.com/eks/), Azure AKS (https://azure.microsoft.com/en-us/products/kubernetes-service), or GCP GKE (https://cloud.google.com/kubernetes-engine)), smaller or regional cloud providers, or on-premises, using one's own infrastructure.
Qubernetes [7] (or Kubernetes for Quantum) exposes quantum computation concepts, such as tasks and hardware capabilities, following established cloud-native principles, as Kubernetes jobs and quantum-capable nodes. Following these conventions allows the seamless integration of quantum computing into the larger Kubernetes ecosystem.
High-performance computing (HPC) relies on supercomputers and parallel processing techniques to solve complex computational problems quickly and efficiently, in application domains that require massive computational power [15]. HPC systems typically consist of multiple interconnected processors or nodes that work together to execute tasks in parallel, enabling large-scale simulations, data analysis, and scientific computations, often leveraging architectures compatible with the Open Message Passing Interface (Open MPI, https://www.open-mpi.org).
Quantum computing enables the existing base of cloud-native and HPC applications to accelerate suitable computational tasks. Two notable approaches for integrating the two software stacks are HPC-QC [16], which uses Open MPI, and XACC [17], based on the OSGi architecture (https://www.osgi.org). Similarly, Qiskit's quantum-serverless [18] proposes a cloud-based approach for running hybrid classical-quantum programs. The proposed programming model, conforming to the Ray computing framework (https://www.ray.io), makes it easy to scale Python workloads on a Kubernetes cluster in which the quantum execution environment is represented by a distributed Qiskit runtime that allows transparent access to multiple QPUs. Despite all these efforts, the integration of quantum computing into classical paradigms remains fragmented. EuroHPC aims to address this with the Universal Quantum Access development [19].
III. Implementation

III-A. System Architecture and Components
The solution enables a quantum software developer to run quantum routines or programs using GPU-accelerated simulators (e.g., Qiskit Aer or PennyLane Lightning) on a Qubernetes cluster with better computational resources than their personal laptop. The solution involves a custom Jupyter kernel, q8s_kernel (https://github.com/torqs-project/q8s-kernel), and a compatible cluster with at least one quantum-capable node that allows the execution of GPU-accelerated containers via the Nvidia Container Toolkit (https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html). To use the solution, the developer must install the kernel and specify the location of the cluster's configuration file (e.g., kubeconfig, https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) as an environment variable. Through the user interface of Jupyter Notebook/Lab, the user can switch between the local development kernel (e.g., IPython) and the remote Qubernetes cluster. The system architecture and components are detailed in Fig. 4.
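As a sketch, assuming the kernel from the repository above installs as a Python package (the package name and file paths here are illustrative, not the project's documented commands), the setup could look like:

```shell
# Illustrative setup only; names are assumptions.
pip install q8s-kernel                  # install the custom Jupyter kernel
export KUBECONFIG="$HOME/.kube/config"  # point the kernel at the Qubernetes cluster
jupyter lab                             # then select the q8s kernel in the notebook UI
```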
apiVersion: batch/v1
kind: Job
metadata:
  name: "quantum-job"
spec:
  template:
    metadata:
      name: "quantum-pod"
    spec:
      containers:
        - name: "quantum-task"
          image: registry.com/user/job-dependencies:v1
          command: ["python", "/app/main.py"]
          resources:
            requests:
              nvidia.com/gpu: '1'  # requires GPU usage
          volumeMounts:
            - name: config-volume
              mountPath: /app
      volumes:
        - name: config-volume
          configMap:
            name: task-files  # "main.py": "code"
      restartPolicy: Never

Listing: Quantum job specification
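For reference, a Job specified this way could also be submitted and inspected manually with standard Kubernetes tooling (the kernel performs the equivalent steps through the API server; the filename is illustrative):

```shell
kubectl apply -f quantum-job.yaml                                     # submit the Job
kubectl wait --for=condition=complete job/quantum-job --timeout=600s  # block until done
kubectl logs job/quantum-job                                          # fetch the task output
kubectl delete job/quantum-job configmap/task-files                   # clean up, as the kernel does
```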
The execution flow is triggered by the user pressing the run button in the notebook. When the kernel receives the do_execute command, it detects the dependencies in the cell code and prepares the container specification (e.g., Dockerfile and requirements.txt), using as a base a pre-built image that includes all dependencies for the CUDA version supported in the cluster. The kernel builds the image and pushes it to the container registry. It then creates a Kubernetes Job specification corresponding to the execution task (see Listing 5) and a ConfigMap containing the actual code, which is mounted as a volume in the Pod. Once the cluster API server receives the request, it schedules the job when the requested GPU resources become available. The Pod pulls the image from the registry and executes the task. The kernel polls the API server for the Job's status, waits for completion, then collects the logs and cleans up by deleting the Job and the ConfigMap. Depending on the container's exit code (0 for success, anything else for failure), the kernel returns the result to the notebook on stdout or stderr, respectively. The kernel rebuilds the image, and the Pod pulls it, only when the dependencies change. The task execution sequence is depicted in Fig. 5.
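The rebuild-only-on-dependency-change behaviour described above can be sketched as a simple content-hash comparison; the function names are illustrative, not the kernel's actual API:

```python
import hashlib
from typing import Optional

def deps_fingerprint(requirements: str, dockerfile: str) -> str:
    """Hash the build inputs that determine the container image contents."""
    h = hashlib.sha256()
    h.update(requirements.encode("utf-8"))
    h.update(dockerfile.encode("utf-8"))
    return h.hexdigest()

def needs_rebuild(previous: Optional[str], requirements: str, dockerfile: str) -> bool:
    """True on the first run, or whenever the dependency fingerprint changed."""
    return previous != deps_fingerprint(requirements, dockerfile)

# First run: no previous fingerprint, so the image must be built and pushed.
fp = deps_fingerprint("qiskit-aer-gpu==0.13.0", "FROM cuda-base:12")
# Re-running the same cell with unchanged dependencies skips the rebuild.
unchanged = needs_rebuild(fp, "qiskit-aer-gpu==0.13.0", "FROM cuda-base:12")
```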
IV. Evaluation

In this section, we introduce the test scenarios and evaluate the use of the custom Jupyter kernel from the ease-of-use and cost-effectiveness perspectives.
Table I: Hardware configurations used in the test scenarios

Scenario | Hardware category | Model/Provider | CPU / RAM | GPU (CUDA compatible)
---|---|---|---|---
Baseline | Business laptop | Dell Latitude 7440 | Intel i5-1345U, 16 GB | -
Cluster with mobile workstation | Mobile workstation | Dell Precision 7680 | Intel i9-13950HX, 64 GB | Nvidia GeForce RTX 4090 Laptop, 16 GB
Cluster with cloud GPUs | Cloud server | puzl.cloud | 2 vCPUs, up to 64 GB | Nvidia A100, 40 GB
We have tested the Jupyter kernel for Qubernetes in the following scenarios, which we consider representative of how it will be used. The baseline consists of the user running the development environment (e.g., Jupyter Notebook/Lab) and executing the quantum routine on their own laptop. The following test scenarios employ CUDA-capable GPUs accessed remotely in Qubernetes clusters:
Cluster with mobile workstation - Users with better hardware share their computational resources (e.g. a mobile workstation) with the rest of the team. The users run the development environment similar to the baseline scenario, but the quantum routines are executed on the mobile workstation.
Cluster with cloud GPUs - The user runs the development environment on their own laptop and executes the quantum routine experiments on a Qubernetes cluster operated by a commercial entity. In our case, we have selected Puzl (https://puzl.cloud/), a provider that offers access to Nvidia A100 40GB GPUs. The cost of using the GPU resources is approximately 1.6 EUR/h, in line with other cloud infrastructure providers. The charging model is based on effective utilization of the GPU resource, i.e., the effective time a Job runs to completion. The cluster is shared with other Puzl users, who execute their own workloads while our quantum routines run.
The detailed hardware configurations of the devices used in the test scenarios are described in Table I .
Ease of use - The solution is perceived by the user as a standard Jupyter kernel (see Fig. 6). To function properly, the implementation relies on Docker and Kubernetes, widely used tools supported on a multitude of operating systems. The cluster on which the quantum task execution is performed is selected by providing the kubeconfig configuration file as an environment variable. The solution does not require a deep understanding of Kubernetes cluster management beyond this configuration file. As such, the user is not exposed to the complexities of enabling access to the GPUs or configuring the CUDA and cuQuantum computational layers.
Cost effectiveness - The GPU resources available in the cluster are used only while the quantum-accelerated job is executed. This behaviour allows the GPU of the mobile workstation to be used by other users of the cluster, maximizing the utilization of resources already acquired by the organization. Similarly, as the cloud GPU resource is charged per use, releasing the resource minimizes cost. As such, the Jupyter kernel for Qubernetes increases the utilization of GPU resources in a cost-efficient manner compared with the standard JupyterHub-on-Kubernetes approach.
V. Conclusions and Future Work

This paper explores the potential of using remote computational resources available in Qubernetes clusters to enhance the experience of quantum software developers across three key aspects: execution speedups, ease of use and cost-effectiveness. To achieve this objective, we have developed a custom Jupyter kernel that packages notebook cells as Kubernetes jobs and executes them on clusters with advanced CUDA computing capabilities. Moving forward, we plan to extend this functionality to other development environments beyond notebooks.
This work has been supported by the Academy of Finland (project DEQSE 349945) and Business Finland (project TORQS 8582/31/2022).