What is the TensorFlow machine intelligence platform?

TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by the Google Brain team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applicable in a wide variety of other domains as well.

It reached version 1.0 in February 2017 and has continued to develop rapidly, with more than 21,000 commits to date, many from outside contributors. This article introduces TensorFlow, its open source community and ecosystem, and highlights some interesting open source TensorFlow models.

TensorFlow is cross-platform. It runs on nearly everything: GPUs and CPUs, including mobile and embedded platforms, and even tensor processing units (TPUs), which are specialized hardware for tensor math. TPUs are not widely available yet, but we have just launched an alpha program.

The TensorFlow distributed execution engine abstracts away the many supported devices and provides a high-performance core, implemented in C++, for the TensorFlow platform.

On top of that sit the Python and C++ frontends (with more to come). The Layers API provides a simpler interface for commonly used layers in deep learning models. On top of that sit higher-level APIs, including Keras (more on the Keras.io site) and the Estimator API, which makes training and evaluating distributed models easier.
And finally, a number of commonly used models are ready to use out of the box, with more to come.
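
As a brief illustration of those higher-level APIs, here is a minimal sketch of defining and compiling a small model with the Keras API bundled in TensorFlow; the input shape and layer sizes are arbitrary, chosen only for the example:

    import tensorflow as tf

    # A small fully connected model built with the Keras API; the input
    # shape and layer sizes here are illustrative only.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])

    # Compiling attaches an optimizer and loss, so the model is ready to train.
    model.compile(optimizer="adam", loss="mse")
    model.summary()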

TensorFlow execution model

Graphs

Machine learning can get complex quickly, and deep learning models can become large. For many model graphs, you need distributed training to be able to iterate within a reasonable time frame. And you will typically want the models you develop to be deployable to multiple platforms.

With the current version of TensorFlow, you write code to build a computation graph, then execute it. The graph is a data structure that fully describes the computation you want to perform. This has many benefits:

  • It is portable: the graph can be deployed to production without depending on any of the code that built it, only the runtime needed to execute it.
  • It is transformable and optimizable: the graph can be transformed to produce a more optimal version for a given platform, and memory or compute optimizations can be made with trade-offs between them. This is useful, for example, in supporting faster mobile inference after training on larger machines.
  • Support for distributed execution.
TensorFlow’s high-level APIs, in combination with computation graphs, enable a rich and flexible development environment and powerful production capabilities in the same framework.
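
As a minimal sketch of this build-then-execute model, using the graph-based TensorFlow 1.x API described in this article (the values fed in are arbitrary):

    import tensorflow as tf

    # Build a computation graph; nothing is computed yet, we are only
    # describing the calculation we want to perform.
    a = tf.placeholder(tf.float32, name="a")
    b = tf.placeholder(tf.float32, name="b")
    total = a + b

    # Execute the graph in a session, feeding in concrete values.
    with tf.Session() as sess:
        print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))  # prints 7.0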

Eager execution

An upcoming addition to TensorFlow is eager execution, an imperative style for writing TensorFlow. When you enable eager execution, TensorFlow kernels execute immediately, rather than being assembled into graphs that will be executed later.

Why is this important? Four main reasons:

  • You can easily inspect and debug intermediate values in your graph.
  • You can use Python control flow within TensorFlow APIs: loops, conditionals, functions, closures, and so on.
  • Eager execution should make debugging more straightforward.
  • Eager’s “define-by-run” semantics will make building and training dynamic graphs easier.

Once you are satisfied with your TensorFlow code running eagerly, you can convert it to a graph automatically. This will make it easier to save, port, and distribute your graphs.
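
Here is a rough sketch of the eager style, assuming a TensorFlow 1.x release in which eager execution is available (the exact enabling call has moved between releases; tf.enable_eager_execution is the name used in later 1.x versions):

    import tensorflow as tf

    # Eager execution must be enabled once, at program startup, before any
    # other TensorFlow calls (API name assumed from later 1.x releases).
    tf.enable_eager_execution()

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

    # Ops run immediately and return concrete values, so ordinary Python
    # control flow and printing work as expected.
    for _ in range(3):
        x = tf.matmul(x, x)
    print(x.numpy())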

TensorFlow and the open source software community

TensorFlow was open sourced in large part to allow the community to improve it with contributions. The TensorFlow team has set up processes to manage pull requests, review and route issues filed, and answer Stack Overflow and mailing-list questions.

To date, we have had more than 890 external contributors add to the code, with everything from small documentation fixes to large additions such as OS X GPU support or the OpenCL implementation. (The TensorFlow GitHub organization has nearly 1,000 non-Google contributors.)

TensorFlow has more than 76,000 stars on GitHub, and the number of other repos that use it is growing every month; as of this writing, there are more than 20,000.

Many of these are community-created tutorials, models, translations, and projects. They can be a great source of examples when you are starting a machine learning project.

Stack Overflow is monitored by the TensorFlow team, and it is a good way to get questions answered (with 8,000+ answered so far).

The external version of TensorFlow is no different from the internal version, aside from minor variations. These include the interface to Google's internal infrastructure (which would be no help to anyone outside Google), some paths, and parts that are not yet ready. The core of TensorFlow, however, is identical. Pull requests to the internal version appear externally within about a day and a half, and vice versa.

And more…

Another useful diagnostic tool is the TensorFlow debugger, tfdbg, which lets you view the internal structure and state of running TensorFlow graphs during training and inference.
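
A minimal sketch of attaching tfdbg, following its documented session-wrapper pattern for TensorFlow 1.x (the graph contents here are illustrative):

    import tensorflow as tf
    from tensorflow.python import debug as tf_debug

    x = tf.constant([1.0, 2.0, 3.0])
    y = x * 2.0

    # Wrap an ordinary Session with the tfdbg CLI wrapper; each run drops
    # into the debugger interface, where tensors and graph structure can
    # be inspected.
    sess = tf.Session()
    sess = tf_debug.LocalCLIDebugWrapperSession(sess)
    sess.run(y)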

Once you have trained a model that you are happy with, the next step is to figure out how to serve it in order to support predictions with the model. TensorFlow Serving is a high-performance serving system for machine-learned models, designed for production environments. It has recently moved to version 1.0.
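
TensorFlow Serving loads models exported in the SavedModel format. Here is a minimal sketch of such an export, assuming a TensorFlow 1.x release that includes tf.saved_model.simple_save (the export path, version subdirectory, and tensor names are illustrative):

    import tensorflow as tf

    # A trivial graph standing in for a trained model.
    x = tf.placeholder(tf.float32, shape=(None, 10), name="x")
    w = tf.Variable(tf.zeros([10, 1]))
    y = tf.matmul(x, w, name="y")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Export to a numbered version subdirectory, which is the layout
        # TensorFlow Serving expects under the model base path.
        tf.saved_model.simple_save(
            sess, "/tmp/example_model/1",
            inputs={"x": x}, outputs={"y": y})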

There are many other tools and libraries we do not have room to cover here, but see the TensorFlow GitHub org repos to learn about them.

The TensorFlow site has many getting-started guides, examples, and tutorials. (A fun new tutorial is the audio recognition example.)
