Main reference: Apple Developer site, TensorFlow setup guide. The key point is matching the versions of tensorflow-metal, tensorflow, and Python!

The tensorflow-metal PyPI repository

| tensorflow | tensorflow-metal | MacOS | features |
|---|---|---|---|
| v2.5 | v0.1.2 | 12.0+ | Pluggable device |
| v2.6 | v0.2.0 | 12.0+ | Variable seq. length RNN |
| v2.7 | v0.3.0 | 12.0+ | Custom op support |
| v2.8 | v0.4.0 | 12.0+ | RNN perf. improvements |
| v2.9 | v0.5.0 | 12.1+ | Distributed training |
| v2.10 | v0.6.0 | 12.1+ | |
| v2.11 | v0.7.0 | 12.1+ | |
| v2.12 | v0.8.0 | 12.1+ | |
| v2.13 | v1.0.0 | 12.1+ | FP16 and BF16 support |
| v2.14 | v1.1.0 | 12.1+ | |
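The pairing rule in the table can be encoded as a small lookup; a minimal sketch (the dict below just transcribes the table above, and the helper name `metal_version_for` is my own invention):

```python
# tensorflow version -> matching tensorflow-metal version, transcribed from the table above
TF_TO_METAL = {
    "2.5": "0.1.2", "2.6": "0.2.0", "2.7": "0.3.0", "2.8": "0.4.0",
    "2.9": "0.5.0", "2.10": "0.6.0", "2.11": "0.7.0", "2.12": "0.8.0",
    "2.13": "1.0.0", "2.14": "1.1.0",
}

def metal_version_for(tf_version: str) -> str:
    """Return the tensorflow-metal version that pairs with a given tensorflow version."""
    # Match on the major.minor prefix, e.g. "2.14.1" -> "2.14"
    key = ".".join(tf_version.split(".")[:2])
    if key not in TF_TO_METAL:
        raise ValueError(f"No known tensorflow-metal pairing for tensorflow {tf_version}")
    return TF_TO_METAL[key]

print(metal_version_for("2.14"))  # -> 1.1.0
```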

TensorFlow-to-Python version mapping (my tests below show that the higher Python 3.11 also works)

Install conda (details omitted; you can also skip conda entirely). List the Python versions conda can install: `conda search python`

If the Python version you want is too old to show up, you may need to add extra conda channels. On my MacBook M3 Pro, Python 3.11 + tf 2.14 + tf-metal v1.1.0 runs fine.

1. Create a conda environment with a specific Python version: `conda create -n tf-metal python=3.11.8`
2. Activate the environment you just created: `conda activate tf-metal`
3. Install TensorFlow with pip: `pip install tensorflow==2.14`
4. Install tensorflow-metal with pip: `pip install tensorflow-metal==1.1.0`
5. Install Jupyter Notebook (or skip ahead to step 10 and run the Python code directly): `conda install jupyter notebook`
6. Start Jupyter Notebook: `jupyter notebook`
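After installing, it is worth sanity-checking that the installed versions actually match what you asked for. A quick sketch using only the standard library; it just reads package metadata, and reports missing packages instead of crashing:

```python
from importlib.metadata import version, PackageNotFoundError

def report_version(package: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

for pkg in ("tensorflow", "tensorflow-metal"):
    print(f"{pkg}: {report_version(pkg)}")
```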

If startup fails with the error below, install chardet: `pip install chardet`

```
Traceback (most recent call last):
  File "/opt/miniconda3/envs/tf-metal/lib/python3.11/site-packages/requests/compat.py", line 11, in <module>
    import chardet
ModuleNotFoundError: No module named 'chardet'
```
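For background: `requests` relies on a character-set detection backend, and depending on its version it looks for `chardet` and/or `charset_normalizer`; the traceback above means the import it tried did not succeed. A small standard-library check of which detector is importable in the current environment:

```python
from importlib.util import find_spec

# requests can use either of these character-set detection backends
for detector in ("chardet", "charset_normalizer"):
    status = "available" if find_spec(detector) is not None else "missing"
    print(f"{detector}: {status}")
```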

Run a test in Jupyter Notebook (or directly in a Python script):

Show device info (ref: a GitHub reference):

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")
```

Training test:

```python
import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()

model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
```

⚠️ You may see the warning: "Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support." Per an Apple Developer Forums thread: "The NUMA error message for an Apple Silicon computer is benign and can be ignored. Apple silicon memory is UMA (unified memory architecture) not NUMA."

Still worried about the warning? Compare the GPU against the CPU with a large matrix multiplication:

```python
import tensorflow as tf
import time

# Create large random matrices
matrix_size = 10000
matrix_a = tf.random.normal((matrix_size, matrix_size))
matrix_b = tf.random.normal((matrix_size, matrix_size))

# Matrix multiplication on the CPU
with tf.device('/CPU:0'):
    start_time = time.time()
    result_cpu = tf.matmul(matrix_a, matrix_b)
    _ = result_cpu.numpy()  # force the op to finish before stopping the clock
    end_time = time.time()
print("Time for matrix multiplication on the CPU:", end_time - start_time, "seconds")

# Matrix multiplication on the GPU
with tf.device('/GPU:0'):
    start_time = time.time()
    result_gpu = tf.matmul(matrix_a, matrix_b)
    _ = result_gpu.numpy()  # GPU ops are dispatched asynchronously; sync before timing
    end_time = time.time()
print("Time for matrix multiplication on the GPU:", end_time - start_time, "seconds")
```

The difference in the results is dramatic.
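The start/stop timing pattern above generalizes to a small reusable helper. A sketch using only the standard library (`time.perf_counter` has better resolution than `time.time` for benchmarking; the function name `timed` is my own):

```python
import time
from typing import Any, Callable, Tuple

def timed(fn: Callable[[], Any]) -> Tuple[Any, float]:
    """Run fn() and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Usage: time a pure-Python stand-in for the matmul above
result, seconds = timed(lambda: sum(i * i for i in range(1_000_000)))
print(f"took {seconds:.4f} s")
```

To time the TensorFlow comparison the same way, pass a lambda that runs the matmul and converts the result with `.numpy()` so the asynchronous GPU dispatch is included in the measurement.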

