31: Load CSV data | TensorFlow | Tutorial
The video shows how to load and use CSV data in TensorFlow.

00:00:00 - Overview
00:01:09 - Import libraries
00:02:30 - * * * In-memory data: Abalone dataset: pd.read_csv()
00:07:30 - Abalone: separate features and target
00:08:34 - Abalone: put features in an array: np.array()
00:09:20 - Abalone: create a regression model in Keras: tf.keras.Sequential()
00:10:00 - Abalone: compile model: .compile()
00:10:44 - Abalone: fit model: .fit()
00:11:50 - * * * Using a preprocessing step
00:12:07 - Normalize: layers.Normalization()
00:12:40 - Normalize: normalize.adapt()
00:13:00 - Create a regression model in Keras: tf.keras.Sequential()
00:13:30 - Compile model: .compile()
00:14:05 - Fit model: .fit()
00:15:13 - * * * Mixed data types: Titanic dataset: pd.read_csv()
00:17:33 - Separate features and target
00:18:38 - Discussion
00:19:14 - Symbolic tensors: create input: tf.keras.Input()
00:20:04 - Symbolic tensors: calculate using input, look at result
00:21:00 - Symbolic tensors: tf.keras.Model(inputs=inputs, outputs=result)
00:22:40 - Titanic: features, type of data
00:25:06 - Titanic: create input: tf.keras.Input()
00:26:47 - Titanic: concatenate all numeric inputs
00:29:54 - Titanic: normalize: layers.Normalization()
00:31:25 - Titanic: collect all symbolic preprocessing results
00:32:20 - Titanic: one-hot encode string features: layers.StringLookup(), layers.CategoryEncoding()
00:34:47 - Titanic: combine preprocessed numeric and string inputs
00:36:36 - Titanic: concatenate all inputs: layers.Concatenate(), tf.keras.Model()
00:39:10 - Titanic: plot model: tf.keras.utils.plot_model()
00:40:00 - Titanic: create feature dictionary
00:42:55 - Titanic: preprocess features in the dictionary
00:43:30 - Titanic: define a function to create a model, compile model
00:45:52 - Titanic: use function
00:47:15 - Titanic: .fit()
00:48:30 - Titanic: save model: .save()
00:48:40 - Titanic: reload and compare saved model: tf.keras.models.load_model()
00:50:44 - * * * Using tf.data: on in-memory data
00:51:34 - Slices: create a generator function
00:52:45 - Slices: use function with features from Titanic
00:54:02 - Slices: TensorSliceDataset: tf.data.Dataset.from_tensor_slices()
00:56:38 - Slices to Dataset: combine features and labels into a dataset: tf.data.Dataset.from_tensor_slices()
00:58:00 - Dataset: BatchDataset: shuffle and batch dataset: .shuffle().batch(32)
00:59:04 - Fit model using batched dataset: .fit()
01:00:00 - * * * Data from a single file
01:00:15 - Titanic: path for CSV file: tf.keras.utils.get_file()
01:01:17 - Titanic: dataset: tf.data.experimental.make_csv_dataset()
01:02:37 - Titanic: look into this dataset: .take()
01:04:31 - Traffic: path for GZIP file: tf.keras.utils.get_file()
01:06:33 - Traffic: dataset: tf.data.experimental.make_csv_dataset()
01:07:40 - Traffic: look into this dataset: .take()
01:09:00 - * * * Cache, snapshot dataset
01:09:17 - Time: without caching
01:10:30 - Time: with caching: .cache().shuffle()
01:11:41 - Time: with snapshotting: .snapshot().shuffle()
01:12:53 - * * * Data from multiple files
01:13:24 - Fonts: download data: tf.keras.utils.get_file()
01:15:14 - Fonts: list multiple CSV files in folders: pathlib.Path('fonts').glob('*.csv')
01:16:50 - Fonts: create dataset from 153 CSV files: tf.data.experimental.make_csv_dataset()
01:17:52 - Fonts: look into the dataset
01:21:10 - Fonts: create function to reshape pixel values
01:24:55 - Fonts: map function to dataset: .map()
01:26:50 - Fonts: plot images
01:28:30 - * * * Lower-level functions
01:29:44 - Data: Titanic: pathlib.Path().read_text()
01:30:42 - Data: split string of lines into a list of lines: text.split()
01:31:30 - Data: type placeholder: str()
01:32:20 - Data: set type as string: tf.io.decode_csv()
01:34:00 - Data: type placeholders: str(), float(), int()
01:34:52 - Data: set types: tf.io.decode_csv()
01:35:37 - CsvDataset: tf.data.experimental.CsvDataset()
01:37:37 - CsvDataset: alternative code
01:39:37 - Multiple files: pathlib.Path().read_text().splitlines()
01:42:48 - Interleave multiple files: tf.data.Dataset.list_files()
01:45:13 - Interleave: InterleaveDataset: .interleave()
01:46:17 - Interleave: access the values from the dataset
01:48:55 - * * * Performance
01:49:20 - Dataset: tf.data.experimental.make_csv_dataset()
01:50:24 - Dataset: time
01:51:36 - Dataset: .interleave(), tf.data.TextLineDataset()
01:52:42 - Dataset: .map(), tf.io.decode_csv()
01:53:50 - Dataset: MapDataset
01:54:20 - Dataset: time
01:55:11 - Ending notes

# ----------------
# TensorFlow Guide
# ----------------
https://www.tensorflow.org/tutorials/load_data/csv
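The Abalone and preprocessing sections (layers.Normalization → .adapt() → tf.keras.Sequential → .compile() → .fit()) can be sketched roughly as below. This is a minimal sketch assuming TF 2.6+ (where Normalization is a core layer); the features and target here are random stand-in data, not the actual Abalone CSV.

```python
import numpy as np
import tensorflow as tf

# Random stand-in for the Abalone features/target (hypothetical data,
# not the dataset used in the video).
features = np.random.rand(100, 7).astype("float32")
target = np.random.rand(100).astype("float32")

# Adapt a Normalization layer to the feature statistics (mean/variance).
normalize = tf.keras.layers.Normalization(axis=-1)
normalize.adapt(features)

# Regression model with normalization baked in as the first layer.
model = tf.keras.Sequential([
    normalize,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(features, target, epochs=2, verbose=0)
```

Putting the Normalization layer inside the model (rather than normalizing the NumPy array up front) means the saved model carries its own preprocessing.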
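The one-hot encoding of string features from the Titanic section (layers.StringLookup + layers.CategoryEncoding) works roughly as follows; the "male"/"female" vocabulary here is just an illustrative assumption, not the full Titanic preprocessing graph.

```python
import tensorflow as tf

# A toy string feature column (hypothetical values).
sex = tf.constant(["male", "female", "female"])

# StringLookup maps strings to integer indices; index 0 is reserved
# for out-of-vocabulary tokens by default.
lookup = tf.keras.layers.StringLookup(vocabulary=["male", "female"])

# CategoryEncoding turns those indices into one-hot vectors.
one_hot = tf.keras.layers.CategoryEncoding(
    num_tokens=lookup.vocabulary_size(), output_mode="one_hot")

encoded = one_hot(lookup(sex))
print(encoded.numpy())
```

In the video these layers are applied to symbolic tf.keras.Input tensors so the encoding becomes part of the model graph; the eager call above shows the same transformation on concrete values.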
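The "Using tf.data on in-memory data" steps (from_tensor_slices → .shuffle() → .batch(32) → .fit()) can be sketched like this, with small toy arrays instead of the Titanic features:

```python
import numpy as np
import tensorflow as tf

# Toy in-memory data: 10 rows of 2 features each, plus labels.
features = np.arange(20, dtype="float32").reshape(10, 2)
labels = np.arange(10, dtype="int32")

# Pair each feature row with its label, then shuffle and batch.
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=10).batch(4)

for batch_features, batch_labels in dataset:
    print(batch_features.shape, batch_labels.shape)
```

A dataset built this way can be passed directly to model.fit(dataset) in place of separate x/y arrays.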
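The "Data from a single file" sections use tf.data.experimental.make_csv_dataset(), which parses the header, infers column types, batches rows, and can split off a label column. A minimal sketch, using a tiny hypothetical CSV written to a temp directory rather than the Titanic or traffic files downloaded in the video:

```python
import pathlib
import tempfile
import tensorflow as tf

# Write a tiny CSV so the example is self-contained
# (hypothetical columns, not the real Titanic schema).
csv_path = pathlib.Path(tempfile.mkdtemp()) / "toy.csv"
csv_path.write_text("age,fare,survived\n22,7.25,0\n38,71.28,1\n26,7.92,1\n")

dataset = tf.data.experimental.make_csv_dataset(
    str(csv_path),
    batch_size=2,
    label_name="survived",  # split this column off as the label
    num_epochs=1,
    shuffle=False,
)

# Each element is (dict of column-name -> batched tensor, label batch).
features, labels = next(iter(dataset.take(1)))
print(sorted(features.keys()), labels.numpy())
```

Because each element is a (feature-dict, label) pair, this dataset plugs straight into the dictionary-input Titanic model built earlier in the video.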
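In the "Lower-level functions" sections, tf.io.decode_csv() parses raw text lines using record_defaults as per-column type placeholders. A minimal sketch with made-up records:

```python
import tensorflow as tf

# Two raw CSV lines (hypothetical records).
lines = ["1,alpha,3.5", "2,beta,4.0"]

# record_defaults act as type placeholders: int, string, float.
defaults = [
    tf.constant(0),
    tf.constant("", dtype=tf.string),
    tf.constant(0.0),
]

# Returns one tensor per column, each holding all parsed rows.
fields = tf.io.decode_csv(lines, record_defaults=defaults)
print([f.numpy() for f in fields])
```

tf.data.experimental.CsvDataset(), also covered in that section, wraps the same parsing into a dataset that yields one decoded row at a time straight from a file.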
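The "Interleave multiple files" steps (tf.data.Dataset.list_files() → .interleave()) mix records from several files into one stream. A minimal sketch, using two tiny hypothetical text files in place of the font CSVs:

```python
import pathlib
import tempfile
import tensorflow as tf

# Two tiny files stand in for the per-font CSV shards.
folder = pathlib.Path(tempfile.mkdtemp())
(folder / "a.csv").write_text("a1\na2\n")
(folder / "b.csv").write_text("b1\nb2\n")

# list_files finds the shards; interleave opens cycle_length of them
# at once and emits their lines in round-robin order.
files = tf.data.Dataset.list_files(str(folder / "*.csv"), shuffle=False)
lines = files.interleave(tf.data.TextLineDataset, cycle_length=2)

print([line.numpy() for line in lines])
```

With the default block_length=1, consecutive elements come from different files, which is what breaks up long runs of rows from a single shard.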