Added PLL variables (divider masks/shifts, lock enable/detect, etc.)
to a new pllinfo struct for each SoC/PLL (PLLA/C/D/E/M/P/U/X).
Used the pllinfo struct in all clock functions; validated on T210.
Should be equivalent to the prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.

Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
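For reference, a minimal sketch of what such a per-SoC/per-PLL descriptor could look like; every field name below is an illustrative assumption, not the actual definition from this patch:

    #include <stdint.h>

    /* Hypothetical pllinfo descriptor; field names are assumptions. */
    struct pllinfo {
            uint32_t m_shift, m_mask;   /* input divider (M) position/width */
            uint32_t n_shift, n_mask;   /* feedback divider (N) */
            uint32_t p_shift, p_mask;   /* post divider (P) */
            uint32_t lock_ena;          /* lock-enable bit number */
            uint32_t lock_det;          /* lock-detect bit number */
    };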
---
Added 38.4MHz/48MHz entries to pll_x_table for the CPU PLL. These need
to be measured, but should be close to 700MHz (1.4G/2).
Note that some frequencies (13, 26MHz) aren't in the PLLU table in the
T210 TRM, so the 12MHz table entry is used for them. They should never
be selected, since they're not viable T210 OSC frequencies.
Since there are now two new OSC defines, all tables (pll_x_table,
PLLU) had to grow by two entries, but since 38.4/48MHz are not viable
OSC frequencies on T20/30/114, etc., those entries are just set to 0.

Signed-off-by: Tom Warren <twarren@nvidia.com>
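A sketch of the table growth being described; the zeroed rows for the two new oscillator frequencies follow the text above, while the enum ordering and the entry layout are assumptions:

    #include <stdint.h>

    /* Oscillator frequencies; the last two entries are the new ones. */
    enum clock_osc_freq {
            CLOCK_OSC_FREQ_13_0,
            CLOCK_OSC_FREQ_19_2,
            CLOCK_OSC_FREQ_12_0,
            CLOCK_OSC_FREQ_26_0,
            CLOCK_OSC_FREQ_38_4,    /* new: only viable on T210 */
            CLOCK_OSC_FREQ_48_0,    /* new: only viable on T210 */
            CLOCK_OSC_FREQ_COUNT,
    };

    struct pll_entry { uint16_t n, m, p; };  /* assumed field layout */

    /* On T20/30/114 the new rows stay zeroed: not viable OSC freqs. */
    static const struct pll_entry pll_x_table[CLOCK_OSC_FREQ_COUNT] = {
            /* ... existing per-oscillator entries ... */
            [CLOCK_OSC_FREQ_38_4] = { 0 },
            [CLOCK_OSC_FREQ_48_0] = { 0 },
    };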
---
Add a simple function to enable external clocks.

Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
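A guess at the shape of such a helper; the function name, parameter, and stand-in register are assumptions, not taken from the patch:

    #include <stdint.h>

    static volatile uint32_t ext_clk_enb;   /* stand-in enable register */

    /* Enable external peripheral clock 'clk_id' (sketch only). */
    void clock_external_output(int clk_id)
    {
            ext_clk_enb |= 1u << clk_id;
    }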
---
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.

Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
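A sketch of the idea under stated assumptions: the mux field is taken to sit in the top mux_bits bits of the register (the usual Tegra convention), and the function and parameter names are illustrative:

    #include <stdint.h>

    /* Set a peripheral's clock source, given how wide its mux is.
     * Assumes the mux occupies the top 'mux_bits' bits (1..31) of the
     * register; treat this as illustrative, not the real layout. */
    static void set_source_bits(volatile uint32_t *reg, int mux_bits,
                                unsigned int source)
    {
            int shift = 32 - mux_bits;
            uint32_t mask = ((1u << mux_bits) - 1) << shift;

            *reg = (*reg & ~mask) | (((uint32_t)source << shift) & mask);
    }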
---
The get_pll() function can do the wrong thing if passed values that
are out of range. Add checks for this, and add a function which can
return a 'simple' PLL; this can be defined by SoCs with their own
clocks.

Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
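A compact sketch of the range check, with stub types standing in for the real clock structures (all names assumed):

    #include <stddef.h>

    enum clock_id { CLOCK_ID_CGENERAL, CLOCK_ID_PERIPH, CLOCK_ID_COUNT };

    struct clk_pll { unsigned int base, misc; };  /* register stand-in */
    static struct clk_pll plls[CLOCK_ID_COUNT];

    /* Return the PLL registers for 'clkid', or NULL if out of range,
     * instead of silently indexing past the array. */
    static struct clk_pll *get_pll(enum clock_id clkid)
    {
            if ((unsigned int)clkid >= CLOCK_ID_COUNT)
                    return NULL;
            return &plls[clkid];
    }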
---
Based on the Tegra TRM, the system clock (which is the AVP clock) can
run at up to 275MHz. At power-on, the default system clock source is
PLLP_OUT0. In clock_early_init(), PLLP_OUT0 is set to 408MHz, which is
beyond the system clock's upper limit.
The fix is to set the system clock to CLK_M before initializing PLLP,
and then switch back to PLLP_OUT4, which has an appropriate divider
configured, after PLLP has been configured.
Implement this logic in the new function tegra30_set_up_pllp(),
which sets up PLLP and all PLLP_OUT* dividers, and handles the AVP
clock switching. Remove the duplicate PLLP setup from pllx_set_rate()
and adjust_pllp_out_freqs().

Signed-off-by: Jimmy Zhang <jimmzhang@nvidia.com>
[swarren, significantly refactored the change]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
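The safe ordering reads directly out of the description; this sketch mirrors it with a stand-in register and assumed source encodings:

    #include <stdint.h>

    static volatile uint32_t sclk_burst_policy;   /* stand-in register */

    #define SCLK_SOURCE_CLKM       0u   /* assumed encoding */
    #define SCLK_SOURCE_PLLP_OUT4  2u   /* assumed encoding */

    static void init_pllp_and_dividers(void) { /* PLLP -> 408MHz, OUT* divs */ }

    static void tegra30_set_up_pllp(void)
    {
            /* 1. Run the AVP from CLK_M so it never exceeds 275MHz */
            sclk_burst_policy = SCLK_SOURCE_CLKM;

            /* 2. Now it is safe to bring PLLP up to 408MHz */
            init_pllp_and_dividers();

            /* 3. Switch to PLLP_OUT4, which has a suitable divider */
            sclk_burst_policy = SCLK_SOURCE_PLLP_OUT4;
    }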
---
Since all code that sets or interprets MASK_BITS_* now uses the enums
to define/compare the values, there is no need for MASK_BITS_* to have
a specific integer value. In fact, having a specific integer value may
encourage people to hard-code those values, or interpret the values in
incorrect ways.
As such, remove the logic that assigns a specific value to the enum
values, in order to make it completely clear that it's just an enum,
not something that directly represents some integer value.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
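Illustratively (these fragments are not the literal patch; only the value 4 for MASK_BITS_31_28 is attested in the neighboring commit), the change amounts to dropping the initializers:

    /* Before: explicit values invited hard-coding and misuse:
     *
     *      enum mask_bits {
     *              MASK_BITS_31_30 = 2,
     *              MASK_BITS_31_28 = 4,
     *      };
     *
     * After: plain enumerators; only their identity is meaningful. */
    enum mask_bits {
            MASK_BITS_31_30,
            MASK_BITS_31_28,
    };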
---
The only place where the MASK_BITS_* values are used is in
adjust_periph_pll(), which interprets the value 4 (old MASK_BITS_29_28,
new MASK_BITS_31_28) as being associated with mask OUT_CLK_SOURCE4_MASK,
i.e. bits 31:28. Rename the MASK_BITS_* value to reflect how it's
actually implemented.
Note that no Tegra clock register actually uses all of bits 31:28 as
the mux field. Rather, bits 30:28, 29:28, or 28 are used. However, in
those cases, nothing is stored in the bits above the mux field, so it's
safe to pretend that the mux field extends all the way to the end of the
register. As such, the U-Boot clock driver is currently a bit lazy, and
doesn't distinguish between 31:28, 30:28, 29:28 and 28; it just lumps
them all together and pretends they're all 31:28. This patch doesn't
cause this issue; it was pre-existing. Hopefully, future patches will
clean this up.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
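A sketch of the association being described; OUT_CLK_SOURCE4_MASK covering bits 31:28 comes from the message, while the narrower mask's name and the enum are assumptions:

    #include <stdint.h>

    #define OUT_CLK_SOURCE_MASK     (3u << 30)     /* bits 31:30 (assumed) */
    #define OUT_CLK_SOURCE4_MASK    (0xfu << 28)   /* bits 31:28 */

    enum mask_bits { MASK_BITS_31_30, MASK_BITS_31_28 };

    static uint32_t mux_mask(enum mask_bits bits)
    {
            /* MASK_BITS_31_28 selects the full 31:28 field; anything
             * narrower (30:28, 29:28, 28) is currently lumped into it. */
            return bits == MASK_BITS_31_28 ?
                    OUT_CLK_SOURCE4_MASK : OUT_CLK_SOURCE_MASK;
    }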
---
The enum used to define the set of register bits used to represent a
clock's input mux, MUX_BITS_*, is defined separately for each SoC at
present. Move this definition to a common location to ease fixing up
some issues with the definition, and the code that uses it.

Signed-off-by: Tom Warren <twarren@nvidia.com>
[swarren, extracted from a larger patch by Tom]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
---
The CPU complex reset masks do not match the datasheet for the
CLK_RST_CONTROLLER_RST_CPU_CMPLX_SET/CLR_0 registers. For both T20
and T30, these registers consist of groups of 4 bits, with one bit
for each CPU core. On T20 the two high bits of each group are always
stubbed out, as there are only two cores.

Signed-off-by: Alban Bedel <alban.bedel@avionic-design.de>
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
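The layout suggests per-core bit macros along these lines; only the 4-bits-per-group shape comes from the message, and the macro name and group ordering are assumptions:

    /* One bit per core inside each 4-bit group of the
     * RST_CPU_CMPLX_SET/CLR registers (group meaning assumed). */
    #define CPU_CMPLX_RESET_BIT(group, cpu)  (1u << ((group) * 4 + (cpu)))

    /* e.g. assert the group-0 reset of core 1 (illustrative):
     *   writel(CPU_CMPLX_RESET_BIT(0, 1), ...RST_CPU_CMPLX_SET_0);  */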
---
Signed-off-by: Wolfgang Denk <wd@denx.de>
[trini: Fixup common/cmd_io.c]
Signed-off-by: Tom Rini <trini@ti.com>
---
T114 needs the SYSCTR0 counter initialized so the TSC can be read by
the kernel. Do it in the bootloader, since it's a write-once operation
(secure/non-secure mode dependent).

Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
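A sketch of such a one-time init, assuming an ARM generic-counter style layout; the base address, register offsets, and function name below are assumptions, not taken from the patch:

    #include <stdint.h>

    #define SYSCTR0_BASE    0x700f0000u   /* assumed base address */
    #define SYSCTR_CNTCR    0x00u         /* counter control (assumed) */
    #define SYSCTR_CNTFID0  0x20u         /* frequency table entry 0 (assumed) */
    #define CNTCR_EN        (1u << 0)

    static void write32(uint32_t val, uintptr_t addr)
    {
            *(volatile uint32_t *)addr = val;
    }

    /* Publish the oscillator rate as the TSC frequency, then start
     * the counter. Done once by the bootloader. */
    void sysctr0_init(uint32_t osc_hz)
    {
            write32(osc_hz, SYSCTR0_BASE + SYSCTR_CNTFID0);
            write32(CNTCR_EN, SYSCTR0_BASE + SYSCTR_CNTCR);
    }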
---
This 'commonizes' much of the clock/PLL code. SoC-dependent code and
tables are left in arch/cpu/tegraXXX-common/clock.c.
Some T30 tables needed whitespace fixes due to checkpatch complaints.

Signed-off-by: Tom Warren <twarren@nvidia.com>
---
Common Tegra files are in arch-tegra, shared between T20 and T30.
Tegra30-specific headers are in arch-tegra30. Note that some of these
will be filled in as more T30 support is added (drivers, WB/LP0
support, etc.). A couple of Tegra20 files were also changed to support
the common headers in arch-tegra.

Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Simon Glass <sjg@chromium.org>
---
Common practice on Tegra 2 boards is to use the pllp_out4 FO to
generate the ULPI reference clock. For this to work, we have to
override the default hardware-generated output divider. Add a
function that provides a clean way to do so.

Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
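A rough sketch of what overriding that divider involves; the PLLP_OUTB field positions (OUT4 ratio at 31:24, override/enable bits) and the 7.1 fixed-point divisor encoding are assumptions, not confirmed by the message:

    #include <stdint.h>

    static volatile uint32_t pllp_outb;     /* stand-in register */

    #define OUT4_RATIO_SHIFT  24            /* assumed position */
    #define OUT4_OVR          (1u << 18)    /* divider override (assumed) */
    #define OUT4_CLKEN        (1u << 17)    /* clock enable (assumed) */

    /* Program pllp_out4 for 'rate_hz' from 'pllp_hz'. The divisor is
     * assumed to be 7.1 fixed point, i.e. reg = 2*div - 2, so e.g.
     * 408MHz -> 24MHz gives reg = 32 (div = 17). */
    void set_pllp_out4(uint32_t pllp_hz, uint32_t rate_hz)
    {
            uint32_t ratio = (2u * pllp_hz / rate_hz) - 2u;

            pllp_outb &= ~(0xffu << OUT4_RATIO_SHIFT);
            pllp_outb |= (ratio << OUT4_RATIO_SHIFT) | OUT4_OVR | OUT4_CLKEN;
    }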
---
The move is straightforward. ap20.h and tegra20.h were renamed to
ap.h and tegra.h. Some files remain in arch-tegra20 but 'include' a
file in arch-tegra with #defines & structs that will be common
between T20 and T30 HW. HW-specific #defines, etc. stay in the
arch-tegra20 'root' file.
All boards build OK with 'MAKEALL -s tegra20'. checkpatch.pl runs
clean. Seaboard works OK.

Signed-off-by: Tom Warren <twarren@nvidia.com>